| node3 | 0.000ns | 2025-12-23 14:20:51.510 | 1 | INFO | STARTUP | <main> | StaticPlatformBuilder: | ||
| ////////////////////// // Node is Starting // ////////////////////// | |||||||||
| node3 | 88.000ms | 2025-12-23 14:20:51.598 | 2 | DEBUG | STARTUP | <main> | StaticPlatformBuilder: | main() started {} [com.swirlds.logging.legacy.payload.NodeStartPayload] | |
| node3 | 104.000ms | 2025-12-23 14:20:51.614 | 3 | WARN | STARTUP | <main> | PlatformConfigUtils: | Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name. | |
| node3 | 213.000ms | 2025-12-23 14:20:51.723 | 4 | INFO | STARTUP | <main> | Browser: | The following nodes [3] are set to run locally | |
| node3 | 239.000ms | 2025-12-23 14:20:51.749 | 5 | DEBUG | STARTUP | <main> | BootstrapUtils: | Scanning the classpath for RuntimeConstructable classes | |
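The second column in each row appears to be elapsed time since that node's first log line, and the third column the wall-clock timestamp; a minimal sketch checking this relationship against two node3 rows above (the `213.000ms` "Browser" row versus the `0.000ns` first row):

```python
from datetime import datetime

FMT = "%Y-%m-%d %H:%M:%S.%f"

# Wall-clock timestamps copied from the node3 rows above.
t0 = datetime.strptime("2025-12-23 14:20:51.510", FMT)  # first node3 row (0.000ns)
t1 = datetime.strptime("2025-12-23 14:20:51.723", FMT)  # node3 "Browser" row

elapsed = (t1 - t0).total_seconds()
print(f"{elapsed:.3f}s")  # 0.213s, matching the 213.000ms elapsed column
```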
| node2 | 887.000ms | 2025-12-23 14:20:52.397 | 1 | INFO | STARTUP | <main> | StaticPlatformBuilder: | ||
| ////////////////////// // Node is Starting // ////////////////////// | |||||||||
| node0 | 934.000ms | 2025-12-23 14:20:52.444 | 1 | INFO | STARTUP | <main> | StaticPlatformBuilder: | ||
| ////////////////////// // Node is Starting // ////////////////////// | |||||||||
| node2 | 979.000ms | 2025-12-23 14:20:52.489 | 2 | DEBUG | STARTUP | <main> | StaticPlatformBuilder: | main() started {} [com.swirlds.logging.legacy.payload.NodeStartPayload] | |
| node2 | 995.000ms | 2025-12-23 14:20:52.505 | 3 | WARN | STARTUP | <main> | PlatformConfigUtils: | Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name. | |
| node0 | 1.027s | 2025-12-23 14:20:52.537 | 2 | DEBUG | STARTUP | <main> | StaticPlatformBuilder: | main() started {} [com.swirlds.logging.legacy.payload.NodeStartPayload] | |
| node0 | 1.046s | 2025-12-23 14:20:52.556 | 3 | WARN | STARTUP | <main> | PlatformConfigUtils: | Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name. | |
| node2 | 1.107s | 2025-12-23 14:20:52.617 | 4 | INFO | STARTUP | <main> | Browser: | The following nodes [2] are set to run locally | |
| node2 | 1.134s | 2025-12-23 14:20:52.644 | 5 | DEBUG | STARTUP | <main> | BootstrapUtils: | Scanning the classpath for RuntimeConstructable classes | |
| node0 | 1.172s | 2025-12-23 14:20:52.682 | 4 | INFO | STARTUP | <main> | Browser: | The following nodes [0] are set to run locally | |
| node0 | 1.202s | 2025-12-23 14:20:52.712 | 5 | DEBUG | STARTUP | <main> | BootstrapUtils: | Scanning the classpath for RuntimeConstructable classes | |
| node3 | 1.379s | 2025-12-23 14:20:52.889 | 6 | DEBUG | STARTUP | <main> | BootstrapUtils: | Done with registerConstructables, time taken 1138ms | |
| node3 | 1.389s | 2025-12-23 14:20:52.899 | 7 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | constructor called in Main. | |
| node3 | 1.392s | 2025-12-23 14:20:52.902 | 8 | WARN | STARTUP | <main> | PlatformConfigUtils: | Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name. | |
| node3 | 1.430s | 2025-12-23 14:20:52.940 | 9 | INFO | STARTUP | <main> | PrometheusEndpoint: | PrometheusEndpoint: Starting server listing on port: 9999 | |
| node3 | 1.488s | 2025-12-23 14:20:52.998 | 10 | WARN | STARTUP | <main> | CryptoStatic: | There are no keys on disk, Adhoc keys will be generated, but this is incompatible with DAB. | |
| node3 | 1.489s | 2025-12-23 14:20:52.999 | 11 | DEBUG | STARTUP | <main> | CryptoStatic: | Started generating keys | |
| node3 | 2.310s | 2025-12-23 14:20:53.820 | 12 | DEBUG | STARTUP | <main> | CryptoStatic: | Done generating keys | |
| node1 | 2.340s | 2025-12-23 14:20:53.850 | 1 | INFO | STARTUP | <main> | StaticPlatformBuilder: | ||
| ////////////////////// // Node is Starting // ////////////////////// | |||||||||
| node3 | 2.399s | 2025-12-23 14:20:53.909 | 15 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node3 | 2.402s | 2025-12-23 14:20:53.912 | 16 | INFO | STARTUP | <main> | StartupStateUtils: | No saved states were found on disk. | |
| node1 | 2.434s | 2025-12-23 14:20:53.944 | 2 | DEBUG | STARTUP | <main> | StaticPlatformBuilder: | main() started {} [com.swirlds.logging.legacy.payload.NodeStartPayload] | |
| node3 | 2.436s | 2025-12-23 14:20:53.946 | 21 | INFO | STARTUP | <main> | ConsistencyTestingToolState: | New State Constructed. | |
| node1 | 2.451s | 2025-12-23 14:20:53.961 | 3 | WARN | STARTUP | <main> | PlatformConfigUtils: | Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name. | |
| node2 | 2.567s | 2025-12-23 14:20:54.077 | 6 | DEBUG | STARTUP | <main> | BootstrapUtils: | Done with registerConstructables, time taken 1433ms | |
| node1 | 2.573s | 2025-12-23 14:20:54.083 | 4 | INFO | STARTUP | <main> | Browser: | The following nodes [1] are set to run locally | |
| node2 | 2.578s | 2025-12-23 14:20:54.088 | 7 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | constructor called in Main. | |
| node2 | 2.582s | 2025-12-23 14:20:54.092 | 8 | WARN | STARTUP | <main> | PlatformConfigUtils: | Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name. | |
| node0 | 2.584s | 2025-12-23 14:20:54.094 | 6 | DEBUG | STARTUP | <main> | BootstrapUtils: | Done with registerConstructables, time taken 1380ms | |
| node0 | 2.592s | 2025-12-23 14:20:54.102 | 7 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | constructor called in Main. | |
| node0 | 2.595s | 2025-12-23 14:20:54.105 | 8 | WARN | STARTUP | <main> | PlatformConfigUtils: | Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name. | |
| node1 | 2.602s | 2025-12-23 14:20:54.112 | 5 | DEBUG | STARTUP | <main> | BootstrapUtils: | Scanning the classpath for RuntimeConstructable classes | |
| node2 | 2.624s | 2025-12-23 14:20:54.134 | 9 | INFO | STARTUP | <main> | PrometheusEndpoint: | PrometheusEndpoint: Starting server listing on port: 9999 | |
| node0 | 2.633s | 2025-12-23 14:20:54.143 | 9 | INFO | STARTUP | <main> | PrometheusEndpoint: | PrometheusEndpoint: Starting server listing on port: 9999 | |
| node0 | 2.693s | 2025-12-23 14:20:54.203 | 10 | WARN | STARTUP | <main> | CryptoStatic: | There are no keys on disk, Adhoc keys will be generated, but this is incompatible with DAB. | |
| node0 | 2.694s | 2025-12-23 14:20:54.204 | 11 | DEBUG | STARTUP | <main> | CryptoStatic: | Started generating keys | |
| node2 | 2.694s | 2025-12-23 14:20:54.204 | 10 | WARN | STARTUP | <main> | CryptoStatic: | There are no keys on disk, Adhoc keys will be generated, but this is incompatible with DAB. | |
| node2 | 2.695s | 2025-12-23 14:20:54.205 | 11 | DEBUG | STARTUP | <main> | CryptoStatic: | Started generating keys | |
| node3 | 3.180s | 2025-12-23 14:20:54.690 | 24 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node3 | 3.181s | 2025-12-23 14:20:54.691 | 26 | INFO | STARTUP | <main> | BootstrapUtils: | Not upgrading software, current software is version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]. | |
| node3 | 3.187s | 2025-12-23 14:20:54.697 | 28 | INFO | STARTUP | <main> | AddressBookInitializer: | Starting from genesis: using the config address book. | |
| node3 | 3.195s | 2025-12-23 14:20:54.705 | 29 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node3 | 3.198s | 2025-12-23 14:20:54.708 | 30 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node2 | 3.524s | 2025-12-23 14:20:55.034 | 12 | DEBUG | STARTUP | <main> | CryptoStatic: | Done generating keys | |
| node0 | 3.538s | 2025-12-23 14:20:55.048 | 12 | DEBUG | STARTUP | <main> | CryptoStatic: | Done generating keys | |
| node2 | 3.621s | 2025-12-23 14:20:55.131 | 15 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node2 | 3.624s | 2025-12-23 14:20:55.134 | 16 | INFO | STARTUP | <main> | StartupStateUtils: | No saved states were found on disk. | |
| node0 | 3.629s | 2025-12-23 14:20:55.139 | 15 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node0 | 3.631s | 2025-12-23 14:20:55.141 | 16 | INFO | STARTUP | <main> | StartupStateUtils: | No saved states were found on disk. | |
| node2 | 3.662s | 2025-12-23 14:20:55.172 | 21 | INFO | STARTUP | <main> | ConsistencyTestingToolState: | New State Constructed. | |
| node0 | 3.667s | 2025-12-23 14:20:55.177 | 21 | INFO | STARTUP | <main> | ConsistencyTestingToolState: | New State Constructed. | |
| node1 | 4.207s | 2025-12-23 14:20:55.717 | 6 | DEBUG | STARTUP | <main> | BootstrapUtils: | Done with registerConstructables, time taken 1604ms | |
| node1 | 4.217s | 2025-12-23 14:20:55.727 | 7 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | constructor called in Main. | |
| node1 | 4.220s | 2025-12-23 14:20:55.730 | 8 | WARN | STARTUP | <main> | PlatformConfigUtils: | Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name. | |
| node1 | 4.260s | 2025-12-23 14:20:55.770 | 9 | INFO | STARTUP | <main> | PrometheusEndpoint: | PrometheusEndpoint: Starting server listing on port: 9999 | |
| node4 | 4.288s | 2025-12-23 14:20:55.798 | 1 | INFO | STARTUP | <main> | StaticPlatformBuilder: | ||
| ////////////////////// // Node is Starting // ////////////////////// | |||||||||
| node3 | 4.302s | 2025-12-23 14:20:55.812 | 31 | INFO | STARTUP | <main> | OSHealthChecker: | ||
| PASSED - Clock Source Speed Check Report[callsPerSec=26203444] PASSED - Entropy Check Report[success=true, entropySource=Strong Instance, elapsedNanos=202150, randomLong=1367522143851303917, exception=null] PASSED - Entropy Check Report[success=true, entropySource=NativePRNGBlocking:SUN, elapsedNanos=9420, randomLong=2779907216308157114, exception=null] PASSED - File System Check Report[code=SUCCESS, readNanos=1458650, data=35, exception=null] OS Health Check Report - Complete (took 1024 ms) | |||||||||
| node3 | 4.334s | 2025-12-23 14:20:55.844 | 32 | DEBUG | STARTUP | <main> | BootstrapUtils: | jvmPauseDetectorThread started | |
| node1 | 4.339s | 2025-12-23 14:20:55.849 | 10 | WARN | STARTUP | <main> | CryptoStatic: | There are no keys on disk, Adhoc keys will be generated, but this is incompatible with DAB. | |
| node1 | 4.341s | 2025-12-23 14:20:55.851 | 11 | DEBUG | STARTUP | <main> | CryptoStatic: | Started generating keys | |
| node3 | 4.341s | 2025-12-23 14:20:55.851 | 33 | INFO | STARTUP | <main> | StandardScratchpad: | Scratchpad platform.iss contents: | |
| LAST_ISS_ROUND null | |||||||||
| node3 | 4.343s | 2025-12-23 14:20:55.853 | 34 | INFO | STARTUP | <main> | PlatformBuilder: | Default platform pool parallelism: 8 | |
| node4 | 4.402s | 2025-12-23 14:20:55.912 | 2 | DEBUG | STARTUP | <main> | StaticPlatformBuilder: | main() started {} [com.swirlds.logging.legacy.payload.NodeStartPayload] | |
| node4 | 4.424s | 2025-12-23 14:20:55.934 | 3 | WARN | STARTUP | <main> | PlatformConfigUtils: | Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name. | |
| node3 | 4.432s | 2025-12-23 14:20:55.942 | 35 | INFO | STARTUP | <main> | SwirldsPlatform: | Starting with roster history: | |
| RosterHistory[ currentRosterRound: 0 ][ no previous roster set ] Current Roster: { "rosterEntries": [{ "weight": "12500000000", "gossipCaCertificate": "MIIDpjCCAg6gAwIBAgIIdUmpLKzyXgUwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlMTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlMTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBALXCoDQ+HOVsEDTZpFuJITSaGwaKX2is5K1P/lV+G+ll6u36IdqKNnZIirJrpX2N0Ad6NeF/oFcMhietrKt818PDA9Tbb2tqcHNKTxxZAEj7amQTsrU4EsNmUhaPgMs89yj9WLxCXVzW05cQjqYEA/hymzohWs1BdU3Y2KdmELe0v5fzRgDpNgYHhUN7IrlrlgXEWpuKRskBYc4PIvyACijY0/zkeEAyHOshYYGKhQbNm/NGWhFq83ro77CZZhX3Vl7hRnHLaEoCEE8atY8R1Txhy8aObhiS6R8ZVRTkZLar/FG/xe78RQfwHHD1al2w5oHR7xgTZylhbD+nVQ09Zmi25USpvqwumbMBE0OWhV+VH1WLCHfLQs6/5yuDjeZ/0D9tpQ8pfkiEkGLedzUzQkq+4/HmN4IFTOhgJHlu1tVUqohZIPZ5zSzqkqFzFQGRo2uAX8C2EJ3qgQMAEOpH8iOjiSKsezlIPuwvmrVDPxVfpY2Cq60oxRu6B8bZdbQkfwIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQAloxwiVu7pBhkO4fLqYRw4FC0VEx+c47W4xnrq3G/uXMGwE2Mfwple9FZnfT9JgSoT1UVw+cigo4720WdrPqkK8qnA3/PzGXlfJ3k6eFcBuli/KY1TakIJUAxFt5biNKatheMwAKsbF/JyVyaqG2dbSaXQ6hZBLQTYmLrmFWMvi9QdM1S8vNVMjn0hE2qQJtnVRuVwqRaAQ225jDv2CUCT28t0EWE6ccbiRi74l8KoW1Lo3v2EQ6ZZ89Xt3CwFSQHa6YVT685ECy82qMysU+YHBe9WmwJW05UAAY7JRsOo+RuuU/r4acNLmzprG+l7qsqqPkwXTcziw9Y2OYsFgY4bTlIOV0JC0AYApctDB3gbn83LM73CWccGrXq0liSV0wL11wscH3gFohXrwb646+6hgncZiDshlZlWaFSkHQJAxTR9bsbsCwKdZpzIIVOVTOT/3oLQKCCQvPriTpJiNa0P6gB0pq64lNcyG9fL8vS3YFFnWJTZwb8ZzGK+LZ91/2Y=", "gossipEndpoint": [{ "ipAddressV4": "I8GRmQ==", "port": 30124 }, { "ipAddressV4": "CoAAWg==", "port": 30124 }] }, { "nodeId": "1", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJguXwyGFpb8MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTIwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTIwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDYXoYHBtw8adD5sxLZSnlG9XgBLWVbIDl3YA4rZZ11cgl6FG2TvF8UVNXQ177cRm1xUUJRI5ulSgDofnm7Iuf6c/GoQrud2nP1yMWewGslwiEi1h2pxbN7doFvn/92Y0lJVwSV/vOpbIyPRoMeF0jXd7TEI7dYj4S7gV9uWmQCIWjwTZqVsjIAtzEkYnmS0/m5XuD9MJsin8OQRu/PEFL8qaVPQJ2GhOhpUJqvADQ/Lsq/FHcPjylcRcnUQlFRojk2jqugtoRegByjPrAOSYGJeWUCVYmd7W51L/AkVx1rDLeHj0zLTTzQRF5G56i+S+tAcpY/uiCrwLvszFlDlD1diOuaucmu54lalrSTlVe5eOyq2ga2tKi11LQ+w09105zLyRWk7DBU93f5dTYNSmokI7b4sVRxu6SP0p/F9wND77wv2Ax5OpIWWty8zy8Y+xOuRyFu/rJ4ddDmRYvRmptM0rCAfv6hgd3m5Y/OAadQm/OuN91Uq9PIJdlMtjDbIfECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEANutmL3V1PlvlsZ6xG8Sx9cKTok3kf3rBf7D7eE8Nn8ryHi3cw9CvCaj1E6zmTTh9k23DAZVWulhjTY5GWcx5NO7QAWjKau44g/HecNNrWsD/+nIrhmAk2WxKp175CwqJaIWA7CM6VMfFktjaflUPcB6RJnHrAa8M1HUpEsBz0mFmLz7lIaDemxYCE8M8slb6wTMjpL83GB+ejudRe7YK2ZWixM+CGp0ARkV+EecHaCXgEoROUNwP6mZVJcgSVR1QBQwcGAMIrutsKENM8HR9o3LWacigoJXf+IX8c6aJhrHfFvm62q+hi3baj7iR6gebEdWPtmEXgoVWOk230fLGyPU1oBxaDdYa8V4+ZFv03O91By9tuFrwZOcLCb4CPRyr8A47lHNjRIeo2nUF/c+SjV0eBcPKCnn1nW/AQWCxJ0QzzG6tEeMAGdDrE2ujPlB+Y9Sn8vB0zjYQHTr1NKyyXNogB4y48jofLDLDGOQYI6uP2fDgZeiq4dV8w91WbPHV", "gossipEndpoint": [{ "ipAddressV4": "iHSdeQ==", "port": 30125 }, { "ipAddressV4": "CoAAXQ==", "port": 30125 }] }, { "nodeId": "2", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJwswl59m488MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTMwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDAqlNMpfduuW0ETQVjdKf5ZBe3Ug/ybRMoCWIlue8UoxFzamAtoeFEW3GVi862iImRVyHbkBZzDQUw4ABwMdxfzTL9voozkMaOZb4KQ9yZ9zNLAAmSSuE6RFmSJnBtfufxFXqiu6esbcvyropjZLc65F2uoMCpKN0CHFpWEb2GZAaipp7WCOon0NllDLqkjPylluXO4mjbzzMSDPbBWRD8VjjkxZeszWSXYxz9hqcRYX01CGg+jhooCQ6j2yB8sfFAffIeTG6GSV1uCFa4san2emhQWpr+cHaVYJMtejL43HaEVQnF3vh5Z10T/7co63C63aay2hs6Bx5SschosyYiafI7GtbQ4qpOgjEDFT1jlydK21gy6MV3SFEYwcUfxvxxRj6pS7xiMFn4FYnBKPJWkaDkwTqboEshxstvASQOW993uEwzh4EjctRHSjSuTU6S9OsWi5I5cRF+xK6GaWsTp0KyO8uVpuM9kZfpOcor294quyKJ9nylNyIt/m8Q8/ECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAqPLB/xr0Yv1l9w/RO+bqFtl8TkxF/6jOqoEUXY06dEInopLYpmkksZZ9G8vebt6hAoLjaxNMdRqCkzKgy4jn7/SQZNV9FMbZ7ckiDxsBxYZ2ZaBootuWzzVD6hCSO3Tg6JgkIzldtFtNcDVBRgZnHg+Rl6hn+gFV5S2OTTTPHWK7GHwgHXLhK7N0RL4YVrRCi/HTUZnuYCjBwvdDte5iqytY05cAO4p72P6YtDaOdAfL/IIKd1ylCWITDqTp/JDBz1uxjQmsXLVD/KEEtlvYlGjIr+wUUqIUPhFvB6ajl2NO0D/r+t1BH454zbodU92QnOJpXpoNuOv7jjALHCqo70mCSwTNUSZuVP6/KLmQe8sSzYs7O/c25FzHKBYy+aZujoa/X7aI6XVmsUkj6ae9MSvQurk0jMNg/Jy5EtWOMy7WEuyadrAv6KSP3oIfmL9jWoPcyOMfvjRHxGqOfZuFZatAwswY6O0E3ATTrN03t/BVqNHIYIXc6UOiUTo2Nx56", "gossipEndpoint": [{ "ipAddressV4": "iHeLPg==", "port": 30126 }, { "ipAddressV4": "CoAAXA==", "port": 30126 }] }, { "nodeId": "3", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAOxH0o7YkAUoMA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTQwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTQwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDf6+SJl+puqRNd5r2Tb802jQTqPm7k3NXIeU8NQ3Hy9p0G+9p4Hgnt3ftipar7lKPKnp4PFrOP7E7XSKpafxK2OVQ0jTMvc6Yjqt+9mzyNSI1I8cSHTmhJ7kMBt0+NwVM8QN+fbKcbQaoNiPwMcckVtGeMad4aZM6hRyxzI0H3wgMj4JiM9VRwx7JbEo3R7akRwLwGr9ZQm2EQwqiyReNkBnXrsyP4KPPVAoeMfGchoAuBbV+r6v1OeYddocYmZkrsvMXUKF/uEcgd8gTu+pv3jObwIEVqXo1yC6ZlCFqO7LIvT8jTAAljkszoo67ykXTbKS0PZeLDg6nvdPvBMQ50yjfswR88S6N8VU6pud7Y+VbMYUiGzlrFi4MB9dikAjEj4PEetQyZdn84ZXGxerXlU/vTO2Fp4i1ec5rmX1P0WYMlbNELE408j5nfCfzD/qdcF5HZAiUVTYU/SWpzWcn34++KGpuqZZQdsGwCLQWeMeA/OEemYChis4cO94aOzrECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAlj5YIsbYXk2JGP9kRCBLDgz27ymYi1KDbO8g18V4T0zj2Zl7858U7mF9UBSSW+Cjl1UtUdvqFWZhh8jRoO3Jov1QGTULHRfyyPElD4VpwFribiu4GYJaodYy6NE50WwSJf32gLG0jHQWt7q+cOrn6WaG2h8O1sIxbTlnu1kqKQUQtu4oX8u23b5m9QXVJfJVdecwD5Rmab2d3dq/NNv2iNELH0myqtcoqw26xwIvXwaS4Gqi+Y0cOfjWL5Gv5AHIwvBXGIh3KUU7pbyBzqjkigbzSeoZw0C8G2cRTl0+QTuet2SVYlFh5J9/FBLvIfMfIpguglaU6xTVoRpo7RF24qQKFt2IlBROpqcwl0FyfE+2c19FGt1V8E5dYqE4T2mHT6FSOI3DckA2afBm1OCeMNtkqCQT8x+JvdKrgUh44QDm4PIVZDzaxog/zOzRWPCgpCPq0HcNMzgCVFt+4q8eTL9Ju/rQcS9bDosjMA69NGLIOCdPW2i/gkS9x9rTXgyp", "gossipEndpoint": [{ "ipAddressV4": "IkeHcg==", "port": 30127 }, { "ipAddressV4": "CoAAXg==", "port": 30127 }] }, { "nodeId": "4", "gossipCaCertificate": 
"MIIDpjCCAg6gAwIBAgIIXlngkVEv6iMwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlNTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlNTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAL4o3FK8th1cG+FSlw4iT9FlkwK+hOj4Ay6Z70mZlsNwszgxvddUEO4BEdA1iSWfxkYOLl4QwwPr3l394a07VfB5OK3dqJ6CjVdByyvzghtk3gOpkskWlJxp6vah7BbIJFWE8off7fhCdwAGSrwIRdGE8u8GbKJIdHk6/XyjB3j0BXTIgeaPTJxLeuz/2l/dQVRMXyZNxlc5UVQYnX9haMRk7M5bkb9uwfYPRikEJFp6G72x7M7Q9lBGJ3ArCQn/lPJfHSg01GxfDhWH8DOwLaFdv1bCs2zHTn7R7Wq9ymXvkUsZhlYO4mLR8HKDcM3sCrJa2rg8vgnIoZupHABKxkgtT2wxV7fM5f2oiz0mDYDTRJpgmK1lmNANj2tKnGqeDnsW7Q3zwufgZZhbks8+8uigyOyKNbp6D7Vv5KeYRibjr/xh+yWT0v02dtpBIdhqDa5CUVD9fCwigZj3PQc8N4e47ZL6s1pXpQ6Cf0lB0fSsvyhnGRa8HMx2q5eg5j/lCQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQCr9yUzOoi0xhoDE1mqR3FR/iVCq9PaBUURWL743LDMrlEvpzKX0upcwwwdgJFjVqVUywh6rKeHQt4O4UV6FIbpp0PSjSE7XZSK3UNqnhZJhQ3aNrOP+6wBhm2B0ZjrxyMS1EWeD9tcNkdYluO00RlieAEV4zwoAfeFPSB21iXW5dU8idhNuTLptDc7SJoErxN+44jvcrSe/ZhpQohG6WfyDPH0BE1tyzsiD29PAWKkrfhg5kzjTAP/qFp+ByazeltP9/F0NXI5AHbE0pKYr56XUlwDfDZOTU9b1YeS7kKyPvccvC2j9NjGGM7NjafdFLHUTYBZiNUTZXVstddYtTCVbTqI7I/x6hoeeNVDZv7XluwZLrYsDNsNrWU3c9VijPK1CE5Owy+gJoGgxEHfA/n9Jvc3lEesqKBpW92RazkpHW2eD9wh8Ayv3q6PNDGzWyiXA8YWW6yD/dIp2Oh8szZUfOXy8sQ8VW86T6RsqGP5CKKPGW1NnP/KTKe5/WoBLZQ=", "gossipEndpoint": [{ "ipAddressV4": "IkSEKg==", "port": 30128 }, { "ipAddressV4": "CoAAWw==", "port": 30128 }] }] } | |||||||||
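The `gossipEndpoint` entries in the roster JSON above store `ipAddressV4` as the base64 encoding of the four raw address bytes; a minimal decoding sketch, using two values taken from the first roster entry above:

```python
import base64
import ipaddress

def decode_ipv4(b64: str) -> str:
    """Decode a roster 'ipAddressV4' field: base64 of the 4 raw address bytes."""
    return str(ipaddress.IPv4Address(base64.b64decode(b64)))

# Values from the first roster entry above (external and internal endpoints).
print(decode_ipv4("I8GRmQ=="))  # 35.193.145.153
print(decode_ipv4("CoAAWg=="))  # 10.128.0.90
```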
| node3 | 4.456s | 2025-12-23 14:20:55.966 | 36 | INFO | STARTUP | <main> | TransactionHandlingHistory: | Consistency testing tool log path: data/saved/consistency-test/3/ConsistencyTestLog.csv | |
| node3 | 4.456s | 2025-12-23 14:20:55.966 | 37 | INFO | STARTUP | <main> | TransactionHandlingHistory: | No log file found. Starting without any previous history | |
| node3 | 4.471s | 2025-12-23 14:20:55.981 | 38 | INFO | STARTUP | <main> | StateInitializer: | The platform is using the following initial state: | |
| Round: 0 Timestamp: 1970-01-01T00:00:00Z Next consensus number: 0 Legacy running event hash: null Legacy running event mnemonic: null Rounds non-ancient: 0 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1 Root hash: 2e810ee8e8889a9749af2a6dd636308369065f2d81f872f03dcdfdfa9f248a6a2ef095a6aeb09246c380ec06c280924a (root) VirtualMap state / south-tackle-upper-pudding {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":2,"lastLeafPath":4},"Singletons":{"RosterService.ROSTER_STATE":{"path":2,"mnemonic":"clump-chimney-cable-explain"},"PlatformStateService.PLATFORM_STATE":{"path":3,"mnemonic":"normal-stage-book-frozen"}}} | |||||||||
| node3 | 4.474s | 2025-12-23 14:20:55.984 | 40 | INFO | RECONNECT | <<platform-core: reconnectController>> | ReconnectController: | Starting the ReconnectController | |
| node0 | 4.499s | 2025-12-23 14:20:56.009 | 24 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node0 | 4.501s | 2025-12-23 14:20:56.011 | 27 | INFO | STARTUP | <main> | BootstrapUtils: | Not upgrading software, current software is version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]. | |
| node2 | 4.507s | 2025-12-23 14:20:56.017 | 24 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node0 | 4.508s | 2025-12-23 14:20:56.018 | 28 | INFO | STARTUP | <main> | AddressBookInitializer: | Starting from genesis: using the config address book. | |
| node2 | 4.509s | 2025-12-23 14:20:56.019 | 27 | INFO | STARTUP | <main> | BootstrapUtils: | Not upgrading software, current software is version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]. | |
| node2 | 4.517s | 2025-12-23 14:20:56.027 | 28 | INFO | STARTUP | <main> | AddressBookInitializer: | Starting from genesis: using the config address book. | |
| node0 | 4.520s | 2025-12-23 14:20:56.030 | 29 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node0 | 4.524s | 2025-12-23 14:20:56.034 | 30 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node2 | 4.527s | 2025-12-23 14:20:56.037 | 29 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node2 | 4.531s | 2025-12-23 14:20:56.041 | 30 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node4 | 4.577s | 2025-12-23 14:20:56.087 | 4 | INFO | STARTUP | <main> | Browser: | The following nodes [4] are set to run locally | |
| node4 | 4.612s | 2025-12-23 14:20:56.122 | 5 | DEBUG | STARTUP | <main> | BootstrapUtils: | Scanning the classpath for RuntimeConstructable classes | |
| node3 | 4.689s | 2025-12-23 14:20:56.199 | 41 | INFO | EVENT_STREAM | <main> | DefaultConsensusEventStream: | EventStreamManager::updateRunningHash: 38b060a751ac96384cd9327eb1b1e36a21fdb71114be07434c0cc7bf63f6e1da274edebfe76f65fbd51ad2f14898b95b | |
| node3 | 4.694s | 2025-12-23 14:20:56.204 | 42 | INFO | STARTUP | <platformForkJoinThread-2> | Shadowgraph: | Shadowgraph starting from expiration threshold 1 | |
| node3 | 4.698s | 2025-12-23 14:20:56.208 | 43 | INFO | STARTUP | <<start-node-3>> | ConsistencyTestingToolMain: | init called in Main for node 3. | |
| node3 | 4.699s | 2025-12-23 14:20:56.209 | 44 | INFO | STARTUP | <<start-node-3>> | SwirldsPlatform: | Starting platform 3 | |
| node3 | 4.700s | 2025-12-23 14:20:56.210 | 45 | INFO | STARTUP | <<platform: recycle-bin-cleanup>> | RecycleBinImpl: | Deleted 0 files from the recycle bin. | |
| node3 | 4.704s | 2025-12-23 14:20:56.214 | 46 | INFO | STARTUP | <<start-node-3>> | CycleFinder: | No cyclical back pressure detected in wiring model. | |
| node3 | 4.705s | 2025-12-23 14:20:56.215 | 47 | INFO | STARTUP | <<start-node-3>> | DirectSchedulerChecks: | No illegal direct scheduler use detected in the wiring model. | |
| node3 | 4.706s | 2025-12-23 14:20:56.216 | 48 | INFO | STARTUP | <<start-node-3>> | InputWireChecks: | All input wires have been bound. | |
| node3 | 4.707s | 2025-12-23 14:20:56.217 | 49 | WARN | STARTUP | <<start-node-3>> | PcesFileTracker: | No preconsensus event files available | |
| node3 | 4.708s | 2025-12-23 14:20:56.218 | 50 | INFO | STARTUP | <<start-node-3>> | SwirldsPlatform: | replaying preconsensus event stream starting at 0 | |
| node3 | 4.710s | 2025-12-23 14:20:56.220 | 51 | INFO | STARTUP | <<start-node-3>> | PcesReplayer: | Replayed 0 preconsensus events with max birth round -1. These events contained 0 transactions. 0 rounds reached consensus spanning 0.0 nanoseconds of consensus time. The latest round to reach consensus is round 0. Replay took 0.0 nanoseconds. | |
| node3 | 4.711s | 2025-12-23 14:20:56.221 | 52 | INFO | STARTUP | <<app: appMain 3>> | ConsistencyTestingToolMain: | run called in Main. | |
| node3 | 4.713s | 2025-12-23 14:20:56.223 | 53 | INFO | PLATFORM_STATUS | <platformForkJoinThread-1> | StatusStateMachine: | Platform spent 185.0 ms in STARTING_UP. Now in REPLAYING_EVENTS | |
| node3 | 4.718s | 2025-12-23 14:20:56.228 | 54 | INFO | PLATFORM_STATUS | <platformForkJoinThread-1> | StatusStateMachine: | Platform spent 4.0 ms in REPLAYING_EVENTS. Now in OBSERVING | |
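The rows above interleave five nodes in one pipe-delimited stream (layout inferred from these rows: `| node | elapsed | timestamp | seq | level | marker | thread | class | message |`); a minimal sketch that groups rows by node and skips continuation rows such as the "Node is Starting" banners, which carry no node cell:

```python
import re
from collections import defaultdict

# Data rows start with "| nodeN |"; continuation rows do not.
ROW = re.compile(r"^\|\s*(node\d+)\s*\|")

def split_by_node(lines):
    """Group pipe-delimited log rows by node name; skip continuation rows."""
    groups = defaultdict(list)
    for line in lines:
        m = ROW.match(line)
        if m:
            cells = [c.strip() for c in line.strip("|\n").split("|")]
            groups[m.group(1)].append(cells)
    return groups

# Two rows copied from the log above: one data row, one banner continuation.
rows = [
    "| node3 | 4.699s | 2025-12-23 14:20:56.209 | 44 | INFO | STARTUP | <<start-node-3>> | SwirldsPlatform: | Starting platform 3 | |",
    "| ////////////////////// // Node is Starting // ////////////////////// | |||||||||",
]
groups = split_by_node(rows)
print(groups["node3"][0][4])  # INFO (the level cell)
```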
| node1 | 5.225s | 2025-12-23 14:20:56.735 | 12 | DEBUG | STARTUP | <main> | CryptoStatic: | Done generating keys | |
| node1 | 5.317s | 2025-12-23 14:20:56.827 | 15 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node1 | 5.320s | 2025-12-23 14:20:56.830 | 16 | INFO | STARTUP | <main> | StartupStateUtils: | No saved states were found on disk. | |
| node1 | 5.357s | 2025-12-23 14:20:56.867 | 21 | INFO | STARTUP | <main> | ConsistencyTestingToolState: | New State Constructed. | |
| node0 | 5.644s | 2025-12-23 14:20:57.154 | 31 | INFO | STARTUP | <main> | OSHealthChecker: | ||
| PASSED - Clock Source Speed Check Report[callsPerSec=26326598] PASSED - Entropy Check Report[success=true, entropySource=Strong Instance, elapsedNanos=194860, randomLong=1715739060338673034, exception=null] PASSED - Entropy Check Report[success=true, entropySource=NativePRNGBlocking:SUN, elapsedNanos=14930, randomLong=2908123485826903351, exception=null] PASSED - File System Check Report[code=SUCCESS, readNanos=1438890, data=35, exception=null] OS Health Check Report - Complete (took 1026 ms) | |||||||||
| node2 | 5.661s | 2025-12-23 14:20:57.171 | 31 | INFO | STARTUP | <main> | OSHealthChecker: | ||
| PASSED - Clock Source Speed Check Report[callsPerSec=26259031] PASSED - Entropy Check Report[success=true, entropySource=Strong Instance, elapsedNanos=201629, randomLong=9156796902322492785, exception=null] PASSED - Entropy Check Report[success=true, entropySource=NativePRNGBlocking:SUN, elapsedNanos=11410, randomLong=1965704107043266174, exception=null] PASSED - File System Check Report[code=SUCCESS, readNanos=1203990, data=35, exception=null] OS Health Check Report - Complete (took 1029 ms) | |||||||||
| node0 | 5.679s | 2025-12-23 14:20:57.189 | 32 | DEBUG | STARTUP | <main> | BootstrapUtils: | jvmPauseDetectorThread started | |
| node0 | 5.688s | 2025-12-23 14:20:57.198 | 33 | INFO | STARTUP | <main> | StandardScratchpad: | Scratchpad platform.iss contents: | |
| LAST_ISS_ROUND null | |||||||||
| node0 | 5.690s | 2025-12-23 14:20:57.200 | 34 | INFO | STARTUP | <main> | PlatformBuilder: | Default platform pool parallelism: 8 | |
| node2 | 5.701s | 2025-12-23 14:20:57.211 | 32 | DEBUG | STARTUP | <main> | BootstrapUtils: | jvmPauseDetectorThread started | |
| node2 | 5.710s | 2025-12-23 14:20:57.220 | 33 | INFO | STARTUP | <main> | StandardScratchpad: | Scratchpad platform.iss contents: | |
| LAST_ISS_ROUND null | |||||||||
| node2 | 5.713s | 2025-12-23 14:20:57.223 | 34 | INFO | STARTUP | <main> | PlatformBuilder: | Default platform pool parallelism: 8 | |
| node0 | 5.790s | 2025-12-23 14:20:57.300 | 35 | INFO | STARTUP | <main> | SwirldsPlatform: | Starting with roster history: | |
| RosterHistory[ currentRosterRound: 0 ][ no previous roster set ] Current Roster: { "rosterEntries": [{ "weight": "12500000000", "gossipCaCertificate": "MIIDpjCCAg6gAwIBAgIIdUmpLKzyXgUwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlMTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlMTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBALXCoDQ+HOVsEDTZpFuJITSaGwaKX2is5K1P/lV+G+ll6u36IdqKNnZIirJrpX2N0Ad6NeF/oFcMhietrKt818PDA9Tbb2tqcHNKTxxZAEj7amQTsrU4EsNmUhaPgMs89yj9WLxCXVzW05cQjqYEA/hymzohWs1BdU3Y2KdmELe0v5fzRgDpNgYHhUN7IrlrlgXEWpuKRskBYc4PIvyACijY0/zkeEAyHOshYYGKhQbNm/NGWhFq83ro77CZZhX3Vl7hRnHLaEoCEE8atY8R1Txhy8aObhiS6R8ZVRTkZLar/FG/xe78RQfwHHD1al2w5oHR7xgTZylhbD+nVQ09Zmi25USpvqwumbMBE0OWhV+VH1WLCHfLQs6/5yuDjeZ/0D9tpQ8pfkiEkGLedzUzQkq+4/HmN4IFTOhgJHlu1tVUqohZIPZ5zSzqkqFzFQGRo2uAX8C2EJ3qgQMAEOpH8iOjiSKsezlIPuwvmrVDPxVfpY2Cq60oxRu6B8bZdbQkfwIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQAloxwiVu7pBhkO4fLqYRw4FC0VEx+c47W4xnrq3G/uXMGwE2Mfwple9FZnfT9JgSoT1UVw+cigo4720WdrPqkK8qnA3/PzGXlfJ3k6eFcBuli/KY1TakIJUAxFt5biNKatheMwAKsbF/JyVyaqG2dbSaXQ6hZBLQTYmLrmFWMvi9QdM1S8vNVMjn0hE2qQJtnVRuVwqRaAQ225jDv2CUCT28t0EWE6ccbiRi74l8KoW1Lo3v2EQ6ZZ89Xt3CwFSQHa6YVT685ECy82qMysU+YHBe9WmwJW05UAAY7JRsOo+RuuU/r4acNLmzprG+l7qsqqPkwXTcziw9Y2OYsFgY4bTlIOV0JC0AYApctDB3gbn83LM73CWccGrXq0liSV0wL11wscH3gFohXrwb646+6hgncZiDshlZlWaFSkHQJAxTR9bsbsCwKdZpzIIVOVTOT/3oLQKCCQvPriTpJiNa0P6gB0pq64lNcyG9fL8vS3YFFnWJTZwb8ZzGK+LZ91/2Y=", "gossipEndpoint": [{ "ipAddressV4": "I8GRmQ==", "port": 30124 }, { "ipAddressV4": "CoAAWg==", "port": 30124 }] }, { "nodeId": "1", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJguXwyGFpb8MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTIwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTIwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDYXoYHBtw8adD5sxLZSnlG9XgBLWVbIDl3YA4rZZ11cgl6FG2TvF8UVNXQ177cRm1xUUJRI5ulSgDofnm7Iuf6c/GoQrud2nP1yMWewGslwiEi1h2pxbN7doFvn/92Y0lJVwSV/vOpbIyPRoMeF0jXd7TEI7dYj4S7gV9uWmQCIWjwTZqVsjIAtzEkYnmS0/m5XuD9MJsin8OQRu/PEFL8qaVPQJ2GhOhpUJqvADQ/Lsq/FHcPjylcRcnUQlFRojk2jqugtoRegByjPrAOSYGJeWUCVYmd7W51L/AkVx1rDLeHj0zLTTzQRF5G56i+S+tAcpY/uiCrwLvszFlDlD1diOuaucmu54lalrSTlVe5eOyq2ga2tKi11LQ+w09105zLyRWk7DBU93f5dTYNSmokI7b4sVRxu6SP0p/F9wND77wv2Ax5OpIWWty8zy8Y+xOuRyFu/rJ4ddDmRYvRmptM0rCAfv6hgd3m5Y/OAadQm/OuN91Uq9PIJdlMtjDbIfECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEANutmL3V1PlvlsZ6xG8Sx9cKTok3kf3rBf7D7eE8Nn8ryHi3cw9CvCaj1E6zmTTh9k23DAZVWulhjTY5GWcx5NO7QAWjKau44g/HecNNrWsD/+nIrhmAk2WxKp175CwqJaIWA7CM6VMfFktjaflUPcB6RJnHrAa8M1HUpEsBz0mFmLz7lIaDemxYCE8M8slb6wTMjpL83GB+ejudRe7YK2ZWixM+CGp0ARkV+EecHaCXgEoROUNwP6mZVJcgSVR1QBQwcGAMIrutsKENM8HR9o3LWacigoJXf+IX8c6aJhrHfFvm62q+hi3baj7iR6gebEdWPtmEXgoVWOk230fLGyPU1oBxaDdYa8V4+ZFv03O91By9tuFrwZOcLCb4CPRyr8A47lHNjRIeo2nUF/c+SjV0eBcPKCnn1nW/AQWCxJ0QzzG6tEeMAGdDrE2ujPlB+Y9Sn8vB0zjYQHTr1NKyyXNogB4y48jofLDLDGOQYI6uP2fDgZeiq4dV8w91WbPHV", "gossipEndpoint": [{ "ipAddressV4": "iHSdeQ==", "port": 30125 }, { "ipAddressV4": "CoAAXQ==", "port": 30125 }] }, { "nodeId": "2", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJwswl59m488MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTMwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDAqlNMpfduuW0ETQVjdKf5ZBe3Ug/ybRMoCWIlue8UoxFzamAtoeFEW3GVi862iImRVyHbkBZzDQUw4ABwMdxfzTL9voozkMaOZb4KQ9yZ9zNLAAmSSuE6RFmSJnBtfufxFXqiu6esbcvyropjZLc65F2uoMCpKN0CHFpWEb2GZAaipp7WCOon0NllDLqkjPylluXO4mjbzzMSDPbBWRD8VjjkxZeszWSXYxz9hqcRYX01CGg+jhooCQ6j2yB8sfFAffIeTG6GSV1uCFa4san2emhQWpr+cHaVYJMtejL43HaEVQnF3vh5Z10T/7co63C63aay2hs6Bx5SschosyYiafI7GtbQ4qpOgjEDFT1jlydK21gy6MV3SFEYwcUfxvxxRj6pS7xiMFn4FYnBKPJWkaDkwTqboEshxstvASQOW993uEwzh4EjctRHSjSuTU6S9OsWi5I5cRF+xK6GaWsTp0KyO8uVpuM9kZfpOcor294quyKJ9nylNyIt/m8Q8/ECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAqPLB/xr0Yv1l9w/RO+bqFtl8TkxF/6jOqoEUXY06dEInopLYpmkksZZ9G8vebt6hAoLjaxNMdRqCkzKgy4jn7/SQZNV9FMbZ7ckiDxsBxYZ2ZaBootuWzzVD6hCSO3Tg6JgkIzldtFtNcDVBRgZnHg+Rl6hn+gFV5S2OTTTPHWK7GHwgHXLhK7N0RL4YVrRCi/HTUZnuYCjBwvdDte5iqytY05cAO4p72P6YtDaOdAfL/IIKd1ylCWITDqTp/JDBz1uxjQmsXLVD/KEEtlvYlGjIr+wUUqIUPhFvB6ajl2NO0D/r+t1BH454zbodU92QnOJpXpoNuOv7jjALHCqo70mCSwTNUSZuVP6/KLmQe8sSzYs7O/c25FzHKBYy+aZujoa/X7aI6XVmsUkj6ae9MSvQurk0jMNg/Jy5EtWOMy7WEuyadrAv6KSP3oIfmL9jWoPcyOMfvjRHxGqOfZuFZatAwswY6O0E3ATTrN03t/BVqNHIYIXc6UOiUTo2Nx56", "gossipEndpoint": [{ "ipAddressV4": "iHeLPg==", "port": 30126 }, { "ipAddressV4": "CoAAXA==", "port": 30126 }] }, { "nodeId": "3", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAOxH0o7YkAUoMA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTQwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTQwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDf6+SJl+puqRNd5r2Tb802jQTqPm7k3NXIeU8NQ3Hy9p0G+9p4Hgnt3ftipar7lKPKnp4PFrOP7E7XSKpafxK2OVQ0jTMvc6Yjqt+9mzyNSI1I8cSHTmhJ7kMBt0+NwVM8QN+fbKcbQaoNiPwMcckVtGeMad4aZM6hRyxzI0H3wgMj4JiM9VRwx7JbEo3R7akRwLwGr9ZQm2EQwqiyReNkBnXrsyP4KPPVAoeMfGchoAuBbV+r6v1OeYddocYmZkrsvMXUKF/uEcgd8gTu+pv3jObwIEVqXo1yC6ZlCFqO7LIvT8jTAAljkszoo67ykXTbKS0PZeLDg6nvdPvBMQ50yjfswR88S6N8VU6pud7Y+VbMYUiGzlrFi4MB9dikAjEj4PEetQyZdn84ZXGxerXlU/vTO2Fp4i1ec5rmX1P0WYMlbNELE408j5nfCfzD/qdcF5HZAiUVTYU/SWpzWcn34++KGpuqZZQdsGwCLQWeMeA/OEemYChis4cO94aOzrECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAlj5YIsbYXk2JGP9kRCBLDgz27ymYi1KDbO8g18V4T0zj2Zl7858U7mF9UBSSW+Cjl1UtUdvqFWZhh8jRoO3Jov1QGTULHRfyyPElD4VpwFribiu4GYJaodYy6NE50WwSJf32gLG0jHQWt7q+cOrn6WaG2h8O1sIxbTlnu1kqKQUQtu4oX8u23b5m9QXVJfJVdecwD5Rmab2d3dq/NNv2iNELH0myqtcoqw26xwIvXwaS4Gqi+Y0cOfjWL5Gv5AHIwvBXGIh3KUU7pbyBzqjkigbzSeoZw0C8G2cRTl0+QTuet2SVYlFh5J9/FBLvIfMfIpguglaU6xTVoRpo7RF24qQKFt2IlBROpqcwl0FyfE+2c19FGt1V8E5dYqE4T2mHT6FSOI3DckA2afBm1OCeMNtkqCQT8x+JvdKrgUh44QDm4PIVZDzaxog/zOzRWPCgpCPq0HcNMzgCVFt+4q8eTL9Ju/rQcS9bDosjMA69NGLIOCdPW2i/gkS9x9rTXgyp", "gossipEndpoint": [{ "ipAddressV4": "IkeHcg==", "port": 30127 }, { "ipAddressV4": "CoAAXg==", "port": 30127 }] }, { "nodeId": "4", "gossipCaCertificate": 
"MIIDpjCCAg6gAwIBAgIIXlngkVEv6iMwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlNTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlNTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAL4o3FK8th1cG+FSlw4iT9FlkwK+hOj4Ay6Z70mZlsNwszgxvddUEO4BEdA1iSWfxkYOLl4QwwPr3l394a07VfB5OK3dqJ6CjVdByyvzghtk3gOpkskWlJxp6vah7BbIJFWE8off7fhCdwAGSrwIRdGE8u8GbKJIdHk6/XyjB3j0BXTIgeaPTJxLeuz/2l/dQVRMXyZNxlc5UVQYnX9haMRk7M5bkb9uwfYPRikEJFp6G72x7M7Q9lBGJ3ArCQn/lPJfHSg01GxfDhWH8DOwLaFdv1bCs2zHTn7R7Wq9ymXvkUsZhlYO4mLR8HKDcM3sCrJa2rg8vgnIoZupHABKxkgtT2wxV7fM5f2oiz0mDYDTRJpgmK1lmNANj2tKnGqeDnsW7Q3zwufgZZhbks8+8uigyOyKNbp6D7Vv5KeYRibjr/xh+yWT0v02dtpBIdhqDa5CUVD9fCwigZj3PQc8N4e47ZL6s1pXpQ6Cf0lB0fSsvyhnGRa8HMx2q5eg5j/lCQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQCr9yUzOoi0xhoDE1mqR3FR/iVCq9PaBUURWL743LDMrlEvpzKX0upcwwwdgJFjVqVUywh6rKeHQt4O4UV6FIbpp0PSjSE7XZSK3UNqnhZJhQ3aNrOP+6wBhm2B0ZjrxyMS1EWeD9tcNkdYluO00RlieAEV4zwoAfeFPSB21iXW5dU8idhNuTLptDc7SJoErxN+44jvcrSe/ZhpQohG6WfyDPH0BE1tyzsiD29PAWKkrfhg5kzjTAP/qFp+ByazeltP9/F0NXI5AHbE0pKYr56XUlwDfDZOTU9b1YeS7kKyPvccvC2j9NjGGM7NjafdFLHUTYBZiNUTZXVstddYtTCVbTqI7I/x6hoeeNVDZv7XluwZLrYsDNsNrWU3c9VijPK1CE5Owy+gJoGgxEHfA/n9Jvc3lEesqKBpW92RazkpHW2eD9wh8Ayv3q6PNDGzWyiXA8YWW6yD/dIp2Oh8szZUfOXy8sQ8VW86T6RsqGP5CKKPGW1NnP/KTKe5/WoBLZQ=", "gossipEndpoint": [{ "ipAddressV4": "IkSEKg==", "port": 30128 }, { "ipAddressV4": "CoAAWw==", "port": 30128 }] }] } | |||||||||
| node2 | 5.811s | 2025-12-23 14:20:57.321 | 35 | INFO | STARTUP | <main> | SwirldsPlatform: | Starting with roster history: | |
| RosterHistory[ currentRosterRound: 0 ][ no previous roster set ] Current Roster: { "rosterEntries": [{ "weight": "12500000000", "gossipCaCertificate": "MIIDpjCCAg6gAwIBAgIIdUmpLKzyXgUwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlMTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlMTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBALXCoDQ+HOVsEDTZpFuJITSaGwaKX2is5K1P/lV+G+ll6u36IdqKNnZIirJrpX2N0Ad6NeF/oFcMhietrKt818PDA9Tbb2tqcHNKTxxZAEj7amQTsrU4EsNmUhaPgMs89yj9WLxCXVzW05cQjqYEA/hymzohWs1BdU3Y2KdmELe0v5fzRgDpNgYHhUN7IrlrlgXEWpuKRskBYc4PIvyACijY0/zkeEAyHOshYYGKhQbNm/NGWhFq83ro77CZZhX3Vl7hRnHLaEoCEE8atY8R1Txhy8aObhiS6R8ZVRTkZLar/FG/xe78RQfwHHD1al2w5oHR7xgTZylhbD+nVQ09Zmi25USpvqwumbMBE0OWhV+VH1WLCHfLQs6/5yuDjeZ/0D9tpQ8pfkiEkGLedzUzQkq+4/HmN4IFTOhgJHlu1tVUqohZIPZ5zSzqkqFzFQGRo2uAX8C2EJ3qgQMAEOpH8iOjiSKsezlIPuwvmrVDPxVfpY2Cq60oxRu6B8bZdbQkfwIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQAloxwiVu7pBhkO4fLqYRw4FC0VEx+c47W4xnrq3G/uXMGwE2Mfwple9FZnfT9JgSoT1UVw+cigo4720WdrPqkK8qnA3/PzGXlfJ3k6eFcBuli/KY1TakIJUAxFt5biNKatheMwAKsbF/JyVyaqG2dbSaXQ6hZBLQTYmLrmFWMvi9QdM1S8vNVMjn0hE2qQJtnVRuVwqRaAQ225jDv2CUCT28t0EWE6ccbiRi74l8KoW1Lo3v2EQ6ZZ89Xt3CwFSQHa6YVT685ECy82qMysU+YHBe9WmwJW05UAAY7JRsOo+RuuU/r4acNLmzprG+l7qsqqPkwXTcziw9Y2OYsFgY4bTlIOV0JC0AYApctDB3gbn83LM73CWccGrXq0liSV0wL11wscH3gFohXrwb646+6hgncZiDshlZlWaFSkHQJAxTR9bsbsCwKdZpzIIVOVTOT/3oLQKCCQvPriTpJiNa0P6gB0pq64lNcyG9fL8vS3YFFnWJTZwb8ZzGK+LZ91/2Y=", "gossipEndpoint": [{ "ipAddressV4": "I8GRmQ==", "port": 30124 }, { "ipAddressV4": "CoAAWg==", "port": 30124 }] }, { "nodeId": "1", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJguXwyGFpb8MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTIwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTIwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDYXoYHBtw8adD5sxLZSnlG9XgBLWVbIDl3YA4rZZ11cgl6FG2TvF8UVNXQ177cRm1xUUJRI5ulSgDofnm7Iuf6c/GoQrud2nP1yMWewGslwiEi1h2pxbN7doFvn/92Y0lJVwSV/vOpbIyPRoMeF0jXd7TEI7dYj4S7gV9uWmQCIWjwTZqVsjIAtzEkYnmS0/m5XuD9MJsin8OQRu/PEFL8qaVPQJ2GhOhpUJqvADQ/Lsq/FHcPjylcRcnUQlFRojk2jqugtoRegByjPrAOSYGJeWUCVYmd7W51L/AkVx1rDLeHj0zLTTzQRF5G56i+S+tAcpY/uiCrwLvszFlDlD1diOuaucmu54lalrSTlVe5eOyq2ga2tKi11LQ+w09105zLyRWk7DBU93f5dTYNSmokI7b4sVRxu6SP0p/F9wND77wv2Ax5OpIWWty8zy8Y+xOuRyFu/rJ4ddDmRYvRmptM0rCAfv6hgd3m5Y/OAadQm/OuN91Uq9PIJdlMtjDbIfECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEANutmL3V1PlvlsZ6xG8Sx9cKTok3kf3rBf7D7eE8Nn8ryHi3cw9CvCaj1E6zmTTh9k23DAZVWulhjTY5GWcx5NO7QAWjKau44g/HecNNrWsD/+nIrhmAk2WxKp175CwqJaIWA7CM6VMfFktjaflUPcB6RJnHrAa8M1HUpEsBz0mFmLz7lIaDemxYCE8M8slb6wTMjpL83GB+ejudRe7YK2ZWixM+CGp0ARkV+EecHaCXgEoROUNwP6mZVJcgSVR1QBQwcGAMIrutsKENM8HR9o3LWacigoJXf+IX8c6aJhrHfFvm62q+hi3baj7iR6gebEdWPtmEXgoVWOk230fLGyPU1oBxaDdYa8V4+ZFv03O91By9tuFrwZOcLCb4CPRyr8A47lHNjRIeo2nUF/c+SjV0eBcPKCnn1nW/AQWCxJ0QzzG6tEeMAGdDrE2ujPlB+Y9Sn8vB0zjYQHTr1NKyyXNogB4y48jofLDLDGOQYI6uP2fDgZeiq4dV8w91WbPHV", "gossipEndpoint": [{ "ipAddressV4": "iHSdeQ==", "port": 30125 }, { "ipAddressV4": "CoAAXQ==", "port": 30125 }] }, { "nodeId": "2", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJwswl59m488MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTMwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDAqlNMpfduuW0ETQVjdKf5ZBe3Ug/ybRMoCWIlue8UoxFzamAtoeFEW3GVi862iImRVyHbkBZzDQUw4ABwMdxfzTL9voozkMaOZb4KQ9yZ9zNLAAmSSuE6RFmSJnBtfufxFXqiu6esbcvyropjZLc65F2uoMCpKN0CHFpWEb2GZAaipp7WCOon0NllDLqkjPylluXO4mjbzzMSDPbBWRD8VjjkxZeszWSXYxz9hqcRYX01CGg+jhooCQ6j2yB8sfFAffIeTG6GSV1uCFa4san2emhQWpr+cHaVYJMtejL43HaEVQnF3vh5Z10T/7co63C63aay2hs6Bx5SschosyYiafI7GtbQ4qpOgjEDFT1jlydK21gy6MV3SFEYwcUfxvxxRj6pS7xiMFn4FYnBKPJWkaDkwTqboEshxstvASQOW993uEwzh4EjctRHSjSuTU6S9OsWi5I5cRF+xK6GaWsTp0KyO8uVpuM9kZfpOcor294quyKJ9nylNyIt/m8Q8/ECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAqPLB/xr0Yv1l9w/RO+bqFtl8TkxF/6jOqoEUXY06dEInopLYpmkksZZ9G8vebt6hAoLjaxNMdRqCkzKgy4jn7/SQZNV9FMbZ7ckiDxsBxYZ2ZaBootuWzzVD6hCSO3Tg6JgkIzldtFtNcDVBRgZnHg+Rl6hn+gFV5S2OTTTPHWK7GHwgHXLhK7N0RL4YVrRCi/HTUZnuYCjBwvdDte5iqytY05cAO4p72P6YtDaOdAfL/IIKd1ylCWITDqTp/JDBz1uxjQmsXLVD/KEEtlvYlGjIr+wUUqIUPhFvB6ajl2NO0D/r+t1BH454zbodU92QnOJpXpoNuOv7jjALHCqo70mCSwTNUSZuVP6/KLmQe8sSzYs7O/c25FzHKBYy+aZujoa/X7aI6XVmsUkj6ae9MSvQurk0jMNg/Jy5EtWOMy7WEuyadrAv6KSP3oIfmL9jWoPcyOMfvjRHxGqOfZuFZatAwswY6O0E3ATTrN03t/BVqNHIYIXc6UOiUTo2Nx56", "gossipEndpoint": [{ "ipAddressV4": "iHeLPg==", "port": 30126 }, { "ipAddressV4": "CoAAXA==", "port": 30126 }] }, { "nodeId": "3", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAOxH0o7YkAUoMA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTQwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTQwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDf6+SJl+puqRNd5r2Tb802jQTqPm7k3NXIeU8NQ3Hy9p0G+9p4Hgnt3ftipar7lKPKnp4PFrOP7E7XSKpafxK2OVQ0jTMvc6Yjqt+9mzyNSI1I8cSHTmhJ7kMBt0+NwVM8QN+fbKcbQaoNiPwMcckVtGeMad4aZM6hRyxzI0H3wgMj4JiM9VRwx7JbEo3R7akRwLwGr9ZQm2EQwqiyReNkBnXrsyP4KPPVAoeMfGchoAuBbV+r6v1OeYddocYmZkrsvMXUKF/uEcgd8gTu+pv3jObwIEVqXo1yC6ZlCFqO7LIvT8jTAAljkszoo67ykXTbKS0PZeLDg6nvdPvBMQ50yjfswR88S6N8VU6pud7Y+VbMYUiGzlrFi4MB9dikAjEj4PEetQyZdn84ZXGxerXlU/vTO2Fp4i1ec5rmX1P0WYMlbNELE408j5nfCfzD/qdcF5HZAiUVTYU/SWpzWcn34++KGpuqZZQdsGwCLQWeMeA/OEemYChis4cO94aOzrECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAlj5YIsbYXk2JGP9kRCBLDgz27ymYi1KDbO8g18V4T0zj2Zl7858U7mF9UBSSW+Cjl1UtUdvqFWZhh8jRoO3Jov1QGTULHRfyyPElD4VpwFribiu4GYJaodYy6NE50WwSJf32gLG0jHQWt7q+cOrn6WaG2h8O1sIxbTlnu1kqKQUQtu4oX8u23b5m9QXVJfJVdecwD5Rmab2d3dq/NNv2iNELH0myqtcoqw26xwIvXwaS4Gqi+Y0cOfjWL5Gv5AHIwvBXGIh3KUU7pbyBzqjkigbzSeoZw0C8G2cRTl0+QTuet2SVYlFh5J9/FBLvIfMfIpguglaU6xTVoRpo7RF24qQKFt2IlBROpqcwl0FyfE+2c19FGt1V8E5dYqE4T2mHT6FSOI3DckA2afBm1OCeMNtkqCQT8x+JvdKrgUh44QDm4PIVZDzaxog/zOzRWPCgpCPq0HcNMzgCVFt+4q8eTL9Ju/rQcS9bDosjMA69NGLIOCdPW2i/gkS9x9rTXgyp", "gossipEndpoint": [{ "ipAddressV4": "IkeHcg==", "port": 30127 }, { "ipAddressV4": "CoAAXg==", "port": 30127 }] }, { "nodeId": "4", "gossipCaCertificate": 
"MIIDpjCCAg6gAwIBAgIIXlngkVEv6iMwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlNTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlNTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAL4o3FK8th1cG+FSlw4iT9FlkwK+hOj4Ay6Z70mZlsNwszgxvddUEO4BEdA1iSWfxkYOLl4QwwPr3l394a07VfB5OK3dqJ6CjVdByyvzghtk3gOpkskWlJxp6vah7BbIJFWE8off7fhCdwAGSrwIRdGE8u8GbKJIdHk6/XyjB3j0BXTIgeaPTJxLeuz/2l/dQVRMXyZNxlc5UVQYnX9haMRk7M5bkb9uwfYPRikEJFp6G72x7M7Q9lBGJ3ArCQn/lPJfHSg01GxfDhWH8DOwLaFdv1bCs2zHTn7R7Wq9ymXvkUsZhlYO4mLR8HKDcM3sCrJa2rg8vgnIoZupHABKxkgtT2wxV7fM5f2oiz0mDYDTRJpgmK1lmNANj2tKnGqeDnsW7Q3zwufgZZhbks8+8uigyOyKNbp6D7Vv5KeYRibjr/xh+yWT0v02dtpBIdhqDa5CUVD9fCwigZj3PQc8N4e47ZL6s1pXpQ6Cf0lB0fSsvyhnGRa8HMx2q5eg5j/lCQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQCr9yUzOoi0xhoDE1mqR3FR/iVCq9PaBUURWL743LDMrlEvpzKX0upcwwwdgJFjVqVUywh6rKeHQt4O4UV6FIbpp0PSjSE7XZSK3UNqnhZJhQ3aNrOP+6wBhm2B0ZjrxyMS1EWeD9tcNkdYluO00RlieAEV4zwoAfeFPSB21iXW5dU8idhNuTLptDc7SJoErxN+44jvcrSe/ZhpQohG6WfyDPH0BE1tyzsiD29PAWKkrfhg5kzjTAP/qFp+ByazeltP9/F0NXI5AHbE0pKYr56XUlwDfDZOTU9b1YeS7kKyPvccvC2j9NjGGM7NjafdFLHUTYBZiNUTZXVstddYtTCVbTqI7I/x6hoeeNVDZv7XluwZLrYsDNsNrWU3c9VijPK1CE5Owy+gJoGgxEHfA/n9Jvc3lEesqKBpW92RazkpHW2eD9wh8Ayv3q6PNDGzWyiXA8YWW6yD/dIp2Oh8szZUfOXy8sQ8VW86T6RsqGP5CKKPGW1NnP/KTKe5/WoBLZQ=", "gossipEndpoint": [{ "ipAddressV4": "IkSEKg==", "port": 30128 }, { "ipAddressV4": "CoAAWw==", "port": 30128 }] }] } | |||||||||
| node0 | 5.818s | 2025-12-23 14:20:57.328 | 36 | INFO | STARTUP | <main> | TransactionHandlingHistory: | Consistency testing tool log path: data/saved/consistency-test/0/ConsistencyTestLog.csv | |
| node0 | 5.819s | 2025-12-23 14:20:57.329 | 37 | INFO | STARTUP | <main> | TransactionHandlingHistory: | No log file found. Starting without any previous history | |
| node0 | 5.836s | 2025-12-23 14:20:57.346 | 38 | INFO | STARTUP | <main> | StateInitializer: | The platform is using the following initial state: | |
| Round: 0 Timestamp: 1970-01-01T00:00:00Z Next consensus number: 0 Legacy running event hash: null Legacy running event mnemonic: null Rounds non-ancient: 0 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1 Root hash: 2e810ee8e8889a9749af2a6dd636308369065f2d81f872f03dcdfdfa9f248a6a2ef095a6aeb09246c380ec06c280924a (root) VirtualMap state / south-tackle-upper-pudding {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":2,"lastLeafPath":4},"Singletons":{"RosterService.ROSTER_STATE":{"path":2,"mnemonic":"clump-chimney-cable-explain"},"PlatformStateService.PLATFORM_STATE":{"path":3,"mnemonic":"normal-stage-book-frozen"}}} | |||||||||
| node2 | 5.837s | 2025-12-23 14:20:57.347 | 36 | INFO | STARTUP | <main> | TransactionHandlingHistory: | Consistency testing tool log path: data/saved/consistency-test/2/ConsistencyTestLog.csv | |
| node2 | 5.838s | 2025-12-23 14:20:57.348 | 37 | INFO | STARTUP | <main> | TransactionHandlingHistory: | No log file found. Starting without any previous history | |
| node0 | 5.839s | 2025-12-23 14:20:57.349 | 40 | INFO | RECONNECT | <<platform-core: reconnectController>> | ReconnectController: | Starting the ReconnectController | |
| node2 | 5.855s | 2025-12-23 14:20:57.365 | 38 | INFO | STARTUP | <main> | StateInitializer: | The platform is using the following initial state: | |
| Round: 0 Timestamp: 1970-01-01T00:00:00Z Next consensus number: 0 Legacy running event hash: null Legacy running event mnemonic: null Rounds non-ancient: 0 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1 Root hash: 2e810ee8e8889a9749af2a6dd636308369065f2d81f872f03dcdfdfa9f248a6a2ef095a6aeb09246c380ec06c280924a (root) VirtualMap state / south-tackle-upper-pudding {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":2,"lastLeafPath":4},"Singletons":{"RosterService.ROSTER_STATE":{"path":2,"mnemonic":"clump-chimney-cable-explain"},"PlatformStateService.PLATFORM_STATE":{"path":3,"mnemonic":"normal-stage-book-frozen"}}} | |||||||||
| node2 | 5.858s | 2025-12-23 14:20:57.368 | 40 | INFO | RECONNECT | <<platform-core: reconnectController>> | ReconnectController: | Starting the ReconnectController | |
| node0 | 6.079s | 2025-12-23 14:20:57.589 | 41 | INFO | EVENT_STREAM | <main> | DefaultConsensusEventStream: | EventStreamManager::updateRunningHash: 38b060a751ac96384cd9327eb1b1e36a21fdb71114be07434c0cc7bf63f6e1da274edebfe76f65fbd51ad2f14898b95b | |
| node0 | 6.084s | 2025-12-23 14:20:57.594 | 42 | INFO | STARTUP | <platformForkJoinThread-2> | Shadowgraph: | Shadowgraph starting from expiration threshold 1 | |
| node2 | 6.086s | 2025-12-23 14:20:57.596 | 41 | INFO | EVENT_STREAM | <main> | DefaultConsensusEventStream: | EventStreamManager::updateRunningHash: 38b060a751ac96384cd9327eb1b1e36a21fdb71114be07434c0cc7bf63f6e1da274edebfe76f65fbd51ad2f14898b95b | |
| node0 | 6.088s | 2025-12-23 14:20:57.598 | 43 | INFO | STARTUP | <<start-node-0>> | ConsistencyTestingToolMain: | init called in Main for node 0. | |
| node0 | 6.089s | 2025-12-23 14:20:57.599 | 44 | INFO | STARTUP | <<start-node-0>> | SwirldsPlatform: | Starting platform 0 | |
| node0 | 6.090s | 2025-12-23 14:20:57.600 | 45 | INFO | STARTUP | <<platform: recycle-bin-cleanup>> | RecycleBinImpl: | Deleted 0 files from the recycle bin. | |
| node2 | 6.091s | 2025-12-23 14:20:57.601 | 42 | INFO | STARTUP | <platformForkJoinThread-2> | Shadowgraph: | Shadowgraph starting from expiration threshold 1 | |
| node0 | 6.094s | 2025-12-23 14:20:57.604 | 46 | INFO | STARTUP | <<start-node-0>> | CycleFinder: | No cyclical back pressure detected in wiring model. | |
| node0 | 6.095s | 2025-12-23 14:20:57.605 | 47 | INFO | STARTUP | <<start-node-0>> | DirectSchedulerChecks: | No illegal direct scheduler use detected in the wiring model. | |
| node0 | 6.095s | 2025-12-23 14:20:57.605 | 48 | INFO | STARTUP | <<start-node-0>> | InputWireChecks: | All input wires have been bound. | |
| node0 | 6.097s | 2025-12-23 14:20:57.607 | 49 | WARN | STARTUP | <<start-node-0>> | PcesFileTracker: | No preconsensus event files available | |
| node0 | 6.098s | 2025-12-23 14:20:57.608 | 50 | INFO | STARTUP | <<start-node-0>> | SwirldsPlatform: | replaying preconsensus event stream starting at 0 | |
| node2 | 6.098s | 2025-12-23 14:20:57.608 | 43 | INFO | STARTUP | <<start-node-2>> | ConsistencyTestingToolMain: | init called in Main for node 2. | |
| node2 | 6.098s | 2025-12-23 14:20:57.608 | 44 | INFO | STARTUP | <<start-node-2>> | SwirldsPlatform: | Starting platform 2 | |
| node0 | 6.100s | 2025-12-23 14:20:57.610 | 51 | INFO | STARTUP | <<start-node-0>> | PcesReplayer: | Replayed 0 preconsensus events with max birth round -1. These events contained 0 transactions. 0 rounds reached consensus spanning 0.0 nanoseconds of consensus time. The latest round to reach consensus is round 0. Replay took 0.0 nanoseconds. | |
| node2 | 6.100s | 2025-12-23 14:20:57.610 | 45 | INFO | STARTUP | <<platform: recycle-bin-cleanup>> | RecycleBinImpl: | Deleted 0 files from the recycle bin. | |
| node0 | 6.101s | 2025-12-23 14:20:57.611 | 52 | INFO | STARTUP | <<app: appMain 0>> | ConsistencyTestingToolMain: | run called in Main. | |
| node0 | 6.102s | 2025-12-23 14:20:57.612 | 53 | INFO | PLATFORM_STATUS | <platformForkJoinThread-1> | StatusStateMachine: | Platform spent 204.0 ms in STARTING_UP. Now in REPLAYING_EVENTS | |
| node2 | 6.103s | 2025-12-23 14:20:57.613 | 46 | INFO | STARTUP | <<start-node-2>> | CycleFinder: | No cyclical back pressure detected in wiring model. | |
| node2 | 6.104s | 2025-12-23 14:20:57.614 | 47 | INFO | STARTUP | <<start-node-2>> | DirectSchedulerChecks: | No illegal direct scheduler use detected in the wiring model. | |
| node2 | 6.104s | 2025-12-23 14:20:57.614 | 48 | INFO | STARTUP | <<start-node-2>> | InputWireChecks: | All input wires have been bound. | |
| node2 | 6.107s | 2025-12-23 14:20:57.617 | 49 | WARN | STARTUP | <<start-node-2>> | PcesFileTracker: | No preconsensus event files available | |
| node2 | 6.107s | 2025-12-23 14:20:57.617 | 50 | INFO | STARTUP | <<start-node-2>> | SwirldsPlatform: | replaying preconsensus event stream starting at 0 | |
| node0 | 6.108s | 2025-12-23 14:20:57.618 | 54 | INFO | PLATFORM_STATUS | <platformForkJoinThread-1> | StatusStateMachine: | Platform spent 5.0 ms in REPLAYING_EVENTS. Now in OBSERVING | |
| node2 | 6.109s | 2025-12-23 14:20:57.619 | 51 | INFO | STARTUP | <<start-node-2>> | PcesReplayer: | Replayed 0 preconsensus events with max birth round -1. These events contained 0 transactions. 0 rounds reached consensus spanning 0.0 nanoseconds of consensus time. The latest round to reach consensus is round 0. Replay took 0.0 nanoseconds. | |
| node2 | 6.111s | 2025-12-23 14:20:57.621 | 52 | INFO | STARTUP | <<app: appMain 2>> | ConsistencyTestingToolMain: | run called in Main. | |
| node2 | 6.113s | 2025-12-23 14:20:57.623 | 53 | INFO | PLATFORM_STATUS | <platformForkJoinThread-3> | StatusStateMachine: | Platform spent 197.0 ms in STARTING_UP. Now in REPLAYING_EVENTS | |
| node2 | 6.118s | 2025-12-23 14:20:57.628 | 54 | INFO | PLATFORM_STATUS | <platformForkJoinThread-3> | StatusStateMachine: | Platform spent 4.0 ms in REPLAYING_EVENTS. Now in OBSERVING | |
| node1 | 6.181s | 2025-12-23 14:20:57.691 | 24 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node1 | 6.182s | 2025-12-23 14:20:57.692 | 26 | INFO | STARTUP | <main> | BootstrapUtils: | Not upgrading software, current software is version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]. | |
| node1 | 6.188s | 2025-12-23 14:20:57.698 | 28 | INFO | STARTUP | <main> | AddressBookInitializer: | Starting from genesis: using the config address book. | |
| node1 | 6.198s | 2025-12-23 14:20:57.708 | 29 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node1 | 6.200s | 2025-12-23 14:20:57.710 | 30 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node4 | 6.276s | 2025-12-23 14:20:57.786 | 6 | DEBUG | STARTUP | <main> | BootstrapUtils: | Done with registerConstructables, time taken 1663ms | |
| node4 | 6.286s | 2025-12-23 14:20:57.796 | 7 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | constructor called in Main. | |
| node4 | 6.290s | 2025-12-23 14:20:57.800 | 8 | WARN | STARTUP | <main> | PlatformConfigUtils: | Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name. | |
| node4 | 6.328s | 2025-12-23 14:20:57.838 | 9 | INFO | STARTUP | <main> | PrometheusEndpoint: | PrometheusEndpoint: Starting server listing on port: 9999 | |
| node4 | 6.393s | 2025-12-23 14:20:57.903 | 10 | WARN | STARTUP | <main> | CryptoStatic: | There are no keys on disk, Adhoc keys will be generated, but this is incompatible with DAB. | |
| node4 | 6.394s | 2025-12-23 14:20:57.904 | 11 | DEBUG | STARTUP | <main> | CryptoStatic: | Started generating keys | |
| node4 | 7.248s | 2025-12-23 14:20:58.758 | 12 | DEBUG | STARTUP | <main> | CryptoStatic: | Done generating keys | |
| node1 | 7.338s | 2025-12-23 14:20:58.848 | 31 | INFO | STARTUP | <main> | OSHealthChecker: | ||
| PASSED - Clock Source Speed Check Report[callsPerSec=26224853] PASSED - Entropy Check Report[success=true, entropySource=Strong Instance, elapsedNanos=225740, randomLong=-2511294062089011557, exception=null] PASSED - Entropy Check Report[success=true, entropySource=NativePRNGBlocking:SUN, elapsedNanos=11730, randomLong=4838310208209543782, exception=null] PASSED - File System Check Report[code=SUCCESS, readNanos=1456740, data=35, exception=null] OS Health Check Report - Complete (took 1026 ms) | |||||||||
| node4 | 7.358s | 2025-12-23 14:20:58.868 | 15 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node4 | 7.362s | 2025-12-23 14:20:58.872 | 16 | INFO | STARTUP | <main> | StartupStateUtils: | No saved states were found on disk. | |
| node1 | 7.370s | 2025-12-23 14:20:58.880 | 32 | DEBUG | STARTUP | <main> | BootstrapUtils: | jvmPauseDetectorThread started | |
| node1 | 7.379s | 2025-12-23 14:20:58.889 | 33 | INFO | STARTUP | <main> | StandardScratchpad: | Scratchpad platform.iss contents: | |
| LAST_ISS_ROUND null | |||||||||
| node1 | 7.381s | 2025-12-23 14:20:58.891 | 34 | INFO | STARTUP | <main> | PlatformBuilder: | Default platform pool parallelism: 8 | |
| node4 | 7.410s | 2025-12-23 14:20:58.920 | 21 | INFO | STARTUP | <main> | ConsistencyTestingToolState: | New State Constructed. | |
| node1 | 7.479s | 2025-12-23 14:20:58.989 | 35 | INFO | STARTUP | <main> | SwirldsPlatform: | Starting with roster history: | |
| RosterHistory[ currentRosterRound: 0 ][ no previous roster set ] Current Roster: { "rosterEntries": [{ "weight": "12500000000", "gossipCaCertificate": "MIIDpjCCAg6gAwIBAgIIdUmpLKzyXgUwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlMTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlMTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBALXCoDQ+HOVsEDTZpFuJITSaGwaKX2is5K1P/lV+G+ll6u36IdqKNnZIirJrpX2N0Ad6NeF/oFcMhietrKt818PDA9Tbb2tqcHNKTxxZAEj7amQTsrU4EsNmUhaPgMs89yj9WLxCXVzW05cQjqYEA/hymzohWs1BdU3Y2KdmELe0v5fzRgDpNgYHhUN7IrlrlgXEWpuKRskBYc4PIvyACijY0/zkeEAyHOshYYGKhQbNm/NGWhFq83ro77CZZhX3Vl7hRnHLaEoCEE8atY8R1Txhy8aObhiS6R8ZVRTkZLar/FG/xe78RQfwHHD1al2w5oHR7xgTZylhbD+nVQ09Zmi25USpvqwumbMBE0OWhV+VH1WLCHfLQs6/5yuDjeZ/0D9tpQ8pfkiEkGLedzUzQkq+4/HmN4IFTOhgJHlu1tVUqohZIPZ5zSzqkqFzFQGRo2uAX8C2EJ3qgQMAEOpH8iOjiSKsezlIPuwvmrVDPxVfpY2Cq60oxRu6B8bZdbQkfwIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQAloxwiVu7pBhkO4fLqYRw4FC0VEx+c47W4xnrq3G/uXMGwE2Mfwple9FZnfT9JgSoT1UVw+cigo4720WdrPqkK8qnA3/PzGXlfJ3k6eFcBuli/KY1TakIJUAxFt5biNKatheMwAKsbF/JyVyaqG2dbSaXQ6hZBLQTYmLrmFWMvi9QdM1S8vNVMjn0hE2qQJtnVRuVwqRaAQ225jDv2CUCT28t0EWE6ccbiRi74l8KoW1Lo3v2EQ6ZZ89Xt3CwFSQHa6YVT685ECy82qMysU+YHBe9WmwJW05UAAY7JRsOo+RuuU/r4acNLmzprG+l7qsqqPkwXTcziw9Y2OYsFgY4bTlIOV0JC0AYApctDB3gbn83LM73CWccGrXq0liSV0wL11wscH3gFohXrwb646+6hgncZiDshlZlWaFSkHQJAxTR9bsbsCwKdZpzIIVOVTOT/3oLQKCCQvPriTpJiNa0P6gB0pq64lNcyG9fL8vS3YFFnWJTZwb8ZzGK+LZ91/2Y=", "gossipEndpoint": [{ "ipAddressV4": "I8GRmQ==", "port": 30124 }, { "ipAddressV4": "CoAAWg==", "port": 30124 }] }, { "nodeId": "1", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJguXwyGFpb8MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTIwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTIwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDYXoYHBtw8adD5sxLZSnlG9XgBLWVbIDl3YA4rZZ11cgl6FG2TvF8UVNXQ177cRm1xUUJRI5ulSgDofnm7Iuf6c/GoQrud2nP1yMWewGslwiEi1h2pxbN7doFvn/92Y0lJVwSV/vOpbIyPRoMeF0jXd7TEI7dYj4S7gV9uWmQCIWjwTZqVsjIAtzEkYnmS0/m5XuD9MJsin8OQRu/PEFL8qaVPQJ2GhOhpUJqvADQ/Lsq/FHcPjylcRcnUQlFRojk2jqugtoRegByjPrAOSYGJeWUCVYmd7W51L/AkVx1rDLeHj0zLTTzQRF5G56i+S+tAcpY/uiCrwLvszFlDlD1diOuaucmu54lalrSTlVe5eOyq2ga2tKi11LQ+w09105zLyRWk7DBU93f5dTYNSmokI7b4sVRxu6SP0p/F9wND77wv2Ax5OpIWWty8zy8Y+xOuRyFu/rJ4ddDmRYvRmptM0rCAfv6hgd3m5Y/OAadQm/OuN91Uq9PIJdlMtjDbIfECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEANutmL3V1PlvlsZ6xG8Sx9cKTok3kf3rBf7D7eE8Nn8ryHi3cw9CvCaj1E6zmTTh9k23DAZVWulhjTY5GWcx5NO7QAWjKau44g/HecNNrWsD/+nIrhmAk2WxKp175CwqJaIWA7CM6VMfFktjaflUPcB6RJnHrAa8M1HUpEsBz0mFmLz7lIaDemxYCE8M8slb6wTMjpL83GB+ejudRe7YK2ZWixM+CGp0ARkV+EecHaCXgEoROUNwP6mZVJcgSVR1QBQwcGAMIrutsKENM8HR9o3LWacigoJXf+IX8c6aJhrHfFvm62q+hi3baj7iR6gebEdWPtmEXgoVWOk230fLGyPU1oBxaDdYa8V4+ZFv03O91By9tuFrwZOcLCb4CPRyr8A47lHNjRIeo2nUF/c+SjV0eBcPKCnn1nW/AQWCxJ0QzzG6tEeMAGdDrE2ujPlB+Y9Sn8vB0zjYQHTr1NKyyXNogB4y48jofLDLDGOQYI6uP2fDgZeiq4dV8w91WbPHV", "gossipEndpoint": [{ "ipAddressV4": "iHSdeQ==", "port": 30125 }, { "ipAddressV4": "CoAAXQ==", "port": 30125 }] }, { "nodeId": "2", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJwswl59m488MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTMwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDAqlNMpfduuW0ETQVjdKf5ZBe3Ug/ybRMoCWIlue8UoxFzamAtoeFEW3GVi862iImRVyHbkBZzDQUw4ABwMdxfzTL9voozkMaOZb4KQ9yZ9zNLAAmSSuE6RFmSJnBtfufxFXqiu6esbcvyropjZLc65F2uoMCpKN0CHFpWEb2GZAaipp7WCOon0NllDLqkjPylluXO4mjbzzMSDPbBWRD8VjjkxZeszWSXYxz9hqcRYX01CGg+jhooCQ6j2yB8sfFAffIeTG6GSV1uCFa4san2emhQWpr+cHaVYJMtejL43HaEVQnF3vh5Z10T/7co63C63aay2hs6Bx5SschosyYiafI7GtbQ4qpOgjEDFT1jlydK21gy6MV3SFEYwcUfxvxxRj6pS7xiMFn4FYnBKPJWkaDkwTqboEshxstvASQOW993uEwzh4EjctRHSjSuTU6S9OsWi5I5cRF+xK6GaWsTp0KyO8uVpuM9kZfpOcor294quyKJ9nylNyIt/m8Q8/ECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAqPLB/xr0Yv1l9w/RO+bqFtl8TkxF/6jOqoEUXY06dEInopLYpmkksZZ9G8vebt6hAoLjaxNMdRqCkzKgy4jn7/SQZNV9FMbZ7ckiDxsBxYZ2ZaBootuWzzVD6hCSO3Tg6JgkIzldtFtNcDVBRgZnHg+Rl6hn+gFV5S2OTTTPHWK7GHwgHXLhK7N0RL4YVrRCi/HTUZnuYCjBwvdDte5iqytY05cAO4p72P6YtDaOdAfL/IIKd1ylCWITDqTp/JDBz1uxjQmsXLVD/KEEtlvYlGjIr+wUUqIUPhFvB6ajl2NO0D/r+t1BH454zbodU92QnOJpXpoNuOv7jjALHCqo70mCSwTNUSZuVP6/KLmQe8sSzYs7O/c25FzHKBYy+aZujoa/X7aI6XVmsUkj6ae9MSvQurk0jMNg/Jy5EtWOMy7WEuyadrAv6KSP3oIfmL9jWoPcyOMfvjRHxGqOfZuFZatAwswY6O0E3ATTrN03t/BVqNHIYIXc6UOiUTo2Nx56", "gossipEndpoint": [{ "ipAddressV4": "iHeLPg==", "port": 30126 }, { "ipAddressV4": "CoAAXA==", "port": 30126 }] }, { "nodeId": "3", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAOxH0o7YkAUoMA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTQwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTQwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDf6+SJl+puqRNd5r2Tb802jQTqPm7k3NXIeU8NQ3Hy9p0G+9p4Hgnt3ftipar7lKPKnp4PFrOP7E7XSKpafxK2OVQ0jTMvc6Yjqt+9mzyNSI1I8cSHTmhJ7kMBt0+NwVM8QN+fbKcbQaoNiPwMcckVtGeMad4aZM6hRyxzI0H3wgMj4JiM9VRwx7JbEo3R7akRwLwGr9ZQm2EQwqiyReNkBnXrsyP4KPPVAoeMfGchoAuBbV+r6v1OeYddocYmZkrsvMXUKF/uEcgd8gTu+pv3jObwIEVqXo1yC6ZlCFqO7LIvT8jTAAljkszoo67ykXTbKS0PZeLDg6nvdPvBMQ50yjfswR88S6N8VU6pud7Y+VbMYUiGzlrFi4MB9dikAjEj4PEetQyZdn84ZXGxerXlU/vTO2Fp4i1ec5rmX1P0WYMlbNELE408j5nfCfzD/qdcF5HZAiUVTYU/SWpzWcn34++KGpuqZZQdsGwCLQWeMeA/OEemYChis4cO94aOzrECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAlj5YIsbYXk2JGP9kRCBLDgz27ymYi1KDbO8g18V4T0zj2Zl7858U7mF9UBSSW+Cjl1UtUdvqFWZhh8jRoO3Jov1QGTULHRfyyPElD4VpwFribiu4GYJaodYy6NE50WwSJf32gLG0jHQWt7q+cOrn6WaG2h8O1sIxbTlnu1kqKQUQtu4oX8u23b5m9QXVJfJVdecwD5Rmab2d3dq/NNv2iNELH0myqtcoqw26xwIvXwaS4Gqi+Y0cOfjWL5Gv5AHIwvBXGIh3KUU7pbyBzqjkigbzSeoZw0C8G2cRTl0+QTuet2SVYlFh5J9/FBLvIfMfIpguglaU6xTVoRpo7RF24qQKFt2IlBROpqcwl0FyfE+2c19FGt1V8E5dYqE4T2mHT6FSOI3DckA2afBm1OCeMNtkqCQT8x+JvdKrgUh44QDm4PIVZDzaxog/zOzRWPCgpCPq0HcNMzgCVFt+4q8eTL9Ju/rQcS9bDosjMA69NGLIOCdPW2i/gkS9x9rTXgyp", "gossipEndpoint": [{ "ipAddressV4": "IkeHcg==", "port": 30127 }, { "ipAddressV4": "CoAAXg==", "port": 30127 }] }, { "nodeId": "4", "gossipCaCertificate": 
"MIIDpjCCAg6gAwIBAgIIXlngkVEv6iMwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlNTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlNTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAL4o3FK8th1cG+FSlw4iT9FlkwK+hOj4Ay6Z70mZlsNwszgxvddUEO4BEdA1iSWfxkYOLl4QwwPr3l394a07VfB5OK3dqJ6CjVdByyvzghtk3gOpkskWlJxp6vah7BbIJFWE8off7fhCdwAGSrwIRdGE8u8GbKJIdHk6/XyjB3j0BXTIgeaPTJxLeuz/2l/dQVRMXyZNxlc5UVQYnX9haMRk7M5bkb9uwfYPRikEJFp6G72x7M7Q9lBGJ3ArCQn/lPJfHSg01GxfDhWH8DOwLaFdv1bCs2zHTn7R7Wq9ymXvkUsZhlYO4mLR8HKDcM3sCrJa2rg8vgnIoZupHABKxkgtT2wxV7fM5f2oiz0mDYDTRJpgmK1lmNANj2tKnGqeDnsW7Q3zwufgZZhbks8+8uigyOyKNbp6D7Vv5KeYRibjr/xh+yWT0v02dtpBIdhqDa5CUVD9fCwigZj3PQc8N4e47ZL6s1pXpQ6Cf0lB0fSsvyhnGRa8HMx2q5eg5j/lCQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQCr9yUzOoi0xhoDE1mqR3FR/iVCq9PaBUURWL743LDMrlEvpzKX0upcwwwdgJFjVqVUywh6rKeHQt4O4UV6FIbpp0PSjSE7XZSK3UNqnhZJhQ3aNrOP+6wBhm2B0ZjrxyMS1EWeD9tcNkdYluO00RlieAEV4zwoAfeFPSB21iXW5dU8idhNuTLptDc7SJoErxN+44jvcrSe/ZhpQohG6WfyDPH0BE1tyzsiD29PAWKkrfhg5kzjTAP/qFp+ByazeltP9/F0NXI5AHbE0pKYr56XUlwDfDZOTU9b1YeS7kKyPvccvC2j9NjGGM7NjafdFLHUTYBZiNUTZXVstddYtTCVbTqI7I/x6hoeeNVDZv7XluwZLrYsDNsNrWU3c9VijPK1CE5Owy+gJoGgxEHfA/n9Jvc3lEesqKBpW92RazkpHW2eD9wh8Ayv3q6PNDGzWyiXA8YWW6yD/dIp2Oh8szZUfOXy8sQ8VW86T6RsqGP5CKKPGW1NnP/KTKe5/WoBLZQ=", "gossipEndpoint": [{ "ipAddressV4": "IkSEKg==", "port": 30128 }, { "ipAddressV4": "CoAAWw==", "port": 30128 }] }] } | |||||||||
| node1 | 7.504s | 2025-12-23 14:20:59.014 | 36 | INFO | STARTUP | <main> | TransactionHandlingHistory: | Consistency testing tool log path: data/saved/consistency-test/1/ConsistencyTestLog.csv | |
| node1 | 7.505s | 2025-12-23 14:20:59.015 | 37 | INFO | STARTUP | <main> | TransactionHandlingHistory: | No log file found. Starting without any previous history | |
| node1 | 7.522s | 2025-12-23 14:20:59.032 | 38 | INFO | STARTUP | <main> | StateInitializer: | The platform is using the following initial state: | |
| Round: 0 Timestamp: 1970-01-01T00:00:00Z Next consensus number: 0 Legacy running event hash: null Legacy running event mnemonic: null Rounds non-ancient: 0 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1 Root hash: 2e810ee8e8889a9749af2a6dd636308369065f2d81f872f03dcdfdfa9f248a6a2ef095a6aeb09246c380ec06c280924a (root) VirtualMap state / south-tackle-upper-pudding {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":2,"lastLeafPath":4},"Singletons":{"RosterService.ROSTER_STATE":{"path":2,"mnemonic":"clump-chimney-cable-explain"},"PlatformStateService.PLATFORM_STATE":{"path":3,"mnemonic":"normal-stage-book-frozen"}}} | |||||||||
| node1 | 7.525s | 2025-12-23 14:20:59.035 | 40 | INFO | RECONNECT | <<platform-core: reconnectController>> | ReconnectController: | Starting the ReconnectController | |
| node3 | 7.709s | 2025-12-23 14:20:59.219 | 55 | INFO | STARTUP | <<platform-core: MetricsThread #0>> | LegacyCsvWriter: | CsvWriter: Initializing statistics output in CSV format [ csvOutputFolder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats', csvFileName = 'PlatformTesting3.csv' ] | |
| node3 | 7.711s | 2025-12-23 14:20:59.221 | 56 | DEBUG | STARTUP | <<platform-core: MetricsThread #0>> | LegacyCsvWriter: | CsvWriter: Using the existing metrics folder [ folder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats' ] | |
| node1 | 7.753s | 2025-12-23 14:20:59.263 | 41 | INFO | EVENT_STREAM | <main> | DefaultConsensusEventStream: | EventStreamManager::updateRunningHash: 38b060a751ac96384cd9327eb1b1e36a21fdb71114be07434c0cc7bf63f6e1da274edebfe76f65fbd51ad2f14898b95b | |
| node1 | 7.758s | 2025-12-23 14:20:59.268 | 42 | INFO | STARTUP | <platformForkJoinThread-2> | Shadowgraph: | Shadowgraph starting from expiration threshold 1 | |
| node1 | 7.763s | 2025-12-23 14:20:59.273 | 43 | INFO | STARTUP | <<start-node-1>> | ConsistencyTestingToolMain: | init called in Main for node 1. | |
| node1 | 7.764s | 2025-12-23 14:20:59.274 | 44 | INFO | STARTUP | <<start-node-1>> | SwirldsPlatform: | Starting platform 1 | |
| node1 | 7.765s | 2025-12-23 14:20:59.275 | 45 | INFO | STARTUP | <<platform: recycle-bin-cleanup>> | RecycleBinImpl: | Deleted 0 files from the recycle bin. | |
| node1 | 7.768s | 2025-12-23 14:20:59.278 | 46 | INFO | STARTUP | <<start-node-1>> | CycleFinder: | No cyclical back pressure detected in wiring model. | |
| node1 | 7.770s | 2025-12-23 14:20:59.280 | 47 | INFO | STARTUP | <<start-node-1>> | DirectSchedulerChecks: | No illegal direct scheduler use detected in the wiring model. | |
| node1 | 7.770s | 2025-12-23 14:20:59.280 | 48 | INFO | STARTUP | <<start-node-1>> | InputWireChecks: | All input wires have been bound. | |
| node1 | 7.772s | 2025-12-23 14:20:59.282 | 49 | WARN | STARTUP | <<start-node-1>> | PcesFileTracker: | No preconsensus event files available | |
| node1 | 7.773s | 2025-12-23 14:20:59.283 | 50 | INFO | STARTUP | <<start-node-1>> | SwirldsPlatform: | replaying preconsensus event stream starting at 0 | |
| node1 | 7.774s | 2025-12-23 14:20:59.284 | 51 | INFO | STARTUP | <<start-node-1>> | PcesReplayer: | Replayed 0 preconsensus events with max birth round -1. These events contained 0 transactions. 0 rounds reached consensus spanning 0.0 nanoseconds of consensus time. The latest round to reach consensus is round 0. Replay took 0.0 nanoseconds. | |
| node1 | 7.776s | 2025-12-23 14:20:59.286 | 52 | INFO | STARTUP | <<app: appMain 1>> | ConsistencyTestingToolMain: | run called in Main. | |
| node1 | 7.779s | 2025-12-23 14:20:59.289 | 53 | INFO | PLATFORM_STATUS | <platformForkJoinThread-6> | StatusStateMachine: | Platform spent 196.0 ms in STARTING_UP. Now in REPLAYING_EVENTS | |
| node1 | 7.785s | 2025-12-23 14:20:59.295 | 54 | INFO | PLATFORM_STATUS | <platformForkJoinThread-6> | StatusStateMachine: | Platform spent 5.0 ms in REPLAYING_EVENTS. Now in OBSERVING | |
| node4 | 8.313s | 2025-12-23 14:20:59.823 | 24 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node4 | 8.315s | 2025-12-23 14:20:59.825 | 27 | INFO | STARTUP | <main> | BootstrapUtils: | Not upgrading software, current software is version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]. | |
| node4 | 8.323s | 2025-12-23 14:20:59.833 | 28 | INFO | STARTUP | <main> | AddressBookInitializer: | Starting from genesis: using the config address book. | |
| node4 | 8.335s | 2025-12-23 14:20:59.845 | 29 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node4 | 8.339s | 2025-12-23 14:20:59.849 | 30 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node0 | 9.103s | 2025-12-23 14:21:00.613 | 55 | INFO | STARTUP | <<platform-core: MetricsThread #0>> | LegacyCsvWriter: | CsvWriter: Initializing statistics output in CSV format [ csvOutputFolder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats', csvFileName = 'PlatformTesting0.csv' ] | |
| node0 | 9.106s | 2025-12-23 14:21:00.616 | 56 | DEBUG | STARTUP | <<platform-core: MetricsThread #0>> | LegacyCsvWriter: | CsvWriter: Using the existing metrics folder [ folder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats' ] | |
| node2 | 9.112s | 2025-12-23 14:21:00.622 | 55 | INFO | STARTUP | <<platform-core: MetricsThread #0>> | LegacyCsvWriter: | CsvWriter: Initializing statistics output in CSV format [ csvOutputFolder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats', csvFileName = 'PlatformTesting2.csv' ] | |
| node2 | 9.116s | 2025-12-23 14:21:00.626 | 56 | DEBUG | STARTUP | <<platform-core: MetricsThread #0>> | LegacyCsvWriter: | CsvWriter: Using the existing metrics folder [ folder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats' ] | |
| node4 | 9.480s | 2025-12-23 14:21:00.990 | 31 | INFO | STARTUP | <main> | OSHealthChecker: | ||
| PASSED - Clock Source Speed Check Report[callsPerSec=25996032] PASSED - Entropy Check Report[success=true, entropySource=Strong Instance, elapsedNanos=238480, randomLong=-7892803234492197882, exception=null] PASSED - Entropy Check Report[success=true, entropySource=NativePRNGBlocking:SUN, elapsedNanos=34270, randomLong=675227229027182580, exception=null] PASSED - File System Check Report[code=SUCCESS, readNanos=2007310, data=35, exception=null] OS Health Check Report - Complete (took 1033 ms) | |||||||||
| node4 | 9.523s | 2025-12-23 14:21:01.033 | 32 | DEBUG | STARTUP | <main> | BootstrapUtils: | jvmPauseDetectorThread started | |
| node4 | 9.534s | 2025-12-23 14:21:01.044 | 33 | INFO | STARTUP | <main> | StandardScratchpad: | Scratchpad platform.iss contents: | |
| LAST_ISS_ROUND null | |||||||||
| node4 | 9.537s | 2025-12-23 14:21:01.047 | 34 | INFO | STARTUP | <main> | PlatformBuilder: | Default platform pool parallelism: 8 | |
| node4 | 9.656s | 2025-12-23 14:21:01.166 | 35 | INFO | STARTUP | <main> | SwirldsPlatform: | Starting with roster history: | |
| RosterHistory[ currentRosterRound: 0 ][ no previous roster set ] Current Roster: { "rosterEntries": [{ "weight": "12500000000", "gossipCaCertificate": "MIIDpjCCAg6gAwIBAgIIdUmpLKzyXgUwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlMTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlMTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBALXCoDQ+HOVsEDTZpFuJITSaGwaKX2is5K1P/lV+G+ll6u36IdqKNnZIirJrpX2N0Ad6NeF/oFcMhietrKt818PDA9Tbb2tqcHNKTxxZAEj7amQTsrU4EsNmUhaPgMs89yj9WLxCXVzW05cQjqYEA/hymzohWs1BdU3Y2KdmELe0v5fzRgDpNgYHhUN7IrlrlgXEWpuKRskBYc4PIvyACijY0/zkeEAyHOshYYGKhQbNm/NGWhFq83ro77CZZhX3Vl7hRnHLaEoCEE8atY8R1Txhy8aObhiS6R8ZVRTkZLar/FG/xe78RQfwHHD1al2w5oHR7xgTZylhbD+nVQ09Zmi25USpvqwumbMBE0OWhV+VH1WLCHfLQs6/5yuDjeZ/0D9tpQ8pfkiEkGLedzUzQkq+4/HmN4IFTOhgJHlu1tVUqohZIPZ5zSzqkqFzFQGRo2uAX8C2EJ3qgQMAEOpH8iOjiSKsezlIPuwvmrVDPxVfpY2Cq60oxRu6B8bZdbQkfwIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQAloxwiVu7pBhkO4fLqYRw4FC0VEx+c47W4xnrq3G/uXMGwE2Mfwple9FZnfT9JgSoT1UVw+cigo4720WdrPqkK8qnA3/PzGXlfJ3k6eFcBuli/KY1TakIJUAxFt5biNKatheMwAKsbF/JyVyaqG2dbSaXQ6hZBLQTYmLrmFWMvi9QdM1S8vNVMjn0hE2qQJtnVRuVwqRaAQ225jDv2CUCT28t0EWE6ccbiRi74l8KoW1Lo3v2EQ6ZZ89Xt3CwFSQHa6YVT685ECy82qMysU+YHBe9WmwJW05UAAY7JRsOo+RuuU/r4acNLmzprG+l7qsqqPkwXTcziw9Y2OYsFgY4bTlIOV0JC0AYApctDB3gbn83LM73CWccGrXq0liSV0wL11wscH3gFohXrwb646+6hgncZiDshlZlWaFSkHQJAxTR9bsbsCwKdZpzIIVOVTOT/3oLQKCCQvPriTpJiNa0P6gB0pq64lNcyG9fL8vS3YFFnWJTZwb8ZzGK+LZ91/2Y=", "gossipEndpoint": [{ "ipAddressV4": "I8GRmQ==", "port": 30124 }, { "ipAddressV4": "CoAAWg==", "port": 30124 }] }, { "nodeId": "1", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJguXwyGFpb8MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTIwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTIwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDYXoYHBtw8adD5sxLZSnlG9XgBLWVbIDl3YA4rZZ11cgl6FG2TvF8UVNXQ177cRm1xUUJRI5ulSgDofnm7Iuf6c/GoQrud2nP1yMWewGslwiEi1h2pxbN7doFvn/92Y0lJVwSV/vOpbIyPRoMeF0jXd7TEI7dYj4S7gV9uWmQCIWjwTZqVsjIAtzEkYnmS0/m5XuD9MJsin8OQRu/PEFL8qaVPQJ2GhOhpUJqvADQ/Lsq/FHcPjylcRcnUQlFRojk2jqugtoRegByjPrAOSYGJeWUCVYmd7W51L/AkVx1rDLeHj0zLTTzQRF5G56i+S+tAcpY/uiCrwLvszFlDlD1diOuaucmu54lalrSTlVe5eOyq2ga2tKi11LQ+w09105zLyRWk7DBU93f5dTYNSmokI7b4sVRxu6SP0p/F9wND77wv2Ax5OpIWWty8zy8Y+xOuRyFu/rJ4ddDmRYvRmptM0rCAfv6hgd3m5Y/OAadQm/OuN91Uq9PIJdlMtjDbIfECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEANutmL3V1PlvlsZ6xG8Sx9cKTok3kf3rBf7D7eE8Nn8ryHi3cw9CvCaj1E6zmTTh9k23DAZVWulhjTY5GWcx5NO7QAWjKau44g/HecNNrWsD/+nIrhmAk2WxKp175CwqJaIWA7CM6VMfFktjaflUPcB6RJnHrAa8M1HUpEsBz0mFmLz7lIaDemxYCE8M8slb6wTMjpL83GB+ejudRe7YK2ZWixM+CGp0ARkV+EecHaCXgEoROUNwP6mZVJcgSVR1QBQwcGAMIrutsKENM8HR9o3LWacigoJXf+IX8c6aJhrHfFvm62q+hi3baj7iR6gebEdWPtmEXgoVWOk230fLGyPU1oBxaDdYa8V4+ZFv03O91By9tuFrwZOcLCb4CPRyr8A47lHNjRIeo2nUF/c+SjV0eBcPKCnn1nW/AQWCxJ0QzzG6tEeMAGdDrE2ujPlB+Y9Sn8vB0zjYQHTr1NKyyXNogB4y48jofLDLDGOQYI6uP2fDgZeiq4dV8w91WbPHV", "gossipEndpoint": [{ "ipAddressV4": "iHSdeQ==", "port": 30125 }, { "ipAddressV4": "CoAAXQ==", "port": 30125 }] }, { "nodeId": "2", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJwswl59m488MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTMwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDAqlNMpfduuW0ETQVjdKf5ZBe3Ug/ybRMoCWIlue8UoxFzamAtoeFEW3GVi862iImRVyHbkBZzDQUw4ABwMdxfzTL9voozkMaOZb4KQ9yZ9zNLAAmSSuE6RFmSJnBtfufxFXqiu6esbcvyropjZLc65F2uoMCpKN0CHFpWEb2GZAaipp7WCOon0NllDLqkjPylluXO4mjbzzMSDPbBWRD8VjjkxZeszWSXYxz9hqcRYX01CGg+jhooCQ6j2yB8sfFAffIeTG6GSV1uCFa4san2emhQWpr+cHaVYJMtejL43HaEVQnF3vh5Z10T/7co63C63aay2hs6Bx5SschosyYiafI7GtbQ4qpOgjEDFT1jlydK21gy6MV3SFEYwcUfxvxxRj6pS7xiMFn4FYnBKPJWkaDkwTqboEshxstvASQOW993uEwzh4EjctRHSjSuTU6S9OsWi5I5cRF+xK6GaWsTp0KyO8uVpuM9kZfpOcor294quyKJ9nylNyIt/m8Q8/ECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAqPLB/xr0Yv1l9w/RO+bqFtl8TkxF/6jOqoEUXY06dEInopLYpmkksZZ9G8vebt6hAoLjaxNMdRqCkzKgy4jn7/SQZNV9FMbZ7ckiDxsBxYZ2ZaBootuWzzVD6hCSO3Tg6JgkIzldtFtNcDVBRgZnHg+Rl6hn+gFV5S2OTTTPHWK7GHwgHXLhK7N0RL4YVrRCi/HTUZnuYCjBwvdDte5iqytY05cAO4p72P6YtDaOdAfL/IIKd1ylCWITDqTp/JDBz1uxjQmsXLVD/KEEtlvYlGjIr+wUUqIUPhFvB6ajl2NO0D/r+t1BH454zbodU92QnOJpXpoNuOv7jjALHCqo70mCSwTNUSZuVP6/KLmQe8sSzYs7O/c25FzHKBYy+aZujoa/X7aI6XVmsUkj6ae9MSvQurk0jMNg/Jy5EtWOMy7WEuyadrAv6KSP3oIfmL9jWoPcyOMfvjRHxGqOfZuFZatAwswY6O0E3ATTrN03t/BVqNHIYIXc6UOiUTo2Nx56", "gossipEndpoint": [{ "ipAddressV4": "iHeLPg==", "port": 30126 }, { "ipAddressV4": "CoAAXA==", "port": 30126 }] }, { "nodeId": "3", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAOxH0o7YkAUoMA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTQwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTQwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDf6+SJl+puqRNd5r2Tb802jQTqPm7k3NXIeU8NQ3Hy9p0G+9p4Hgnt3ftipar7lKPKnp4PFrOP7E7XSKpafxK2OVQ0jTMvc6Yjqt+9mzyNSI1I8cSHTmhJ7kMBt0+NwVM8QN+fbKcbQaoNiPwMcckVtGeMad4aZM6hRyxzI0H3wgMj4JiM9VRwx7JbEo3R7akRwLwGr9ZQm2EQwqiyReNkBnXrsyP4KPPVAoeMfGchoAuBbV+r6v1OeYddocYmZkrsvMXUKF/uEcgd8gTu+pv3jObwIEVqXo1yC6ZlCFqO7LIvT8jTAAljkszoo67ykXTbKS0PZeLDg6nvdPvBMQ50yjfswR88S6N8VU6pud7Y+VbMYUiGzlrFi4MB9dikAjEj4PEetQyZdn84ZXGxerXlU/vTO2Fp4i1ec5rmX1P0WYMlbNELE408j5nfCfzD/qdcF5HZAiUVTYU/SWpzWcn34++KGpuqZZQdsGwCLQWeMeA/OEemYChis4cO94aOzrECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAlj5YIsbYXk2JGP9kRCBLDgz27ymYi1KDbO8g18V4T0zj2Zl7858U7mF9UBSSW+Cjl1UtUdvqFWZhh8jRoO3Jov1QGTULHRfyyPElD4VpwFribiu4GYJaodYy6NE50WwSJf32gLG0jHQWt7q+cOrn6WaG2h8O1sIxbTlnu1kqKQUQtu4oX8u23b5m9QXVJfJVdecwD5Rmab2d3dq/NNv2iNELH0myqtcoqw26xwIvXwaS4Gqi+Y0cOfjWL5Gv5AHIwvBXGIh3KUU7pbyBzqjkigbzSeoZw0C8G2cRTl0+QTuet2SVYlFh5J9/FBLvIfMfIpguglaU6xTVoRpo7RF24qQKFt2IlBROpqcwl0FyfE+2c19FGt1V8E5dYqE4T2mHT6FSOI3DckA2afBm1OCeMNtkqCQT8x+JvdKrgUh44QDm4PIVZDzaxog/zOzRWPCgpCPq0HcNMzgCVFt+4q8eTL9Ju/rQcS9bDosjMA69NGLIOCdPW2i/gkS9x9rTXgyp", "gossipEndpoint": [{ "ipAddressV4": "IkeHcg==", "port": 30127 }, { "ipAddressV4": "CoAAXg==", "port": 30127 }] }, { "nodeId": "4", "gossipCaCertificate": 
"MIIDpjCCAg6gAwIBAgIIXlngkVEv6iMwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlNTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlNTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAL4o3FK8th1cG+FSlw4iT9FlkwK+hOj4Ay6Z70mZlsNwszgxvddUEO4BEdA1iSWfxkYOLl4QwwPr3l394a07VfB5OK3dqJ6CjVdByyvzghtk3gOpkskWlJxp6vah7BbIJFWE8off7fhCdwAGSrwIRdGE8u8GbKJIdHk6/XyjB3j0BXTIgeaPTJxLeuz/2l/dQVRMXyZNxlc5UVQYnX9haMRk7M5bkb9uwfYPRikEJFp6G72x7M7Q9lBGJ3ArCQn/lPJfHSg01GxfDhWH8DOwLaFdv1bCs2zHTn7R7Wq9ymXvkUsZhlYO4mLR8HKDcM3sCrJa2rg8vgnIoZupHABKxkgtT2wxV7fM5f2oiz0mDYDTRJpgmK1lmNANj2tKnGqeDnsW7Q3zwufgZZhbks8+8uigyOyKNbp6D7Vv5KeYRibjr/xh+yWT0v02dtpBIdhqDa5CUVD9fCwigZj3PQc8N4e47ZL6s1pXpQ6Cf0lB0fSsvyhnGRa8HMx2q5eg5j/lCQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQCr9yUzOoi0xhoDE1mqR3FR/iVCq9PaBUURWL743LDMrlEvpzKX0upcwwwdgJFjVqVUywh6rKeHQt4O4UV6FIbpp0PSjSE7XZSK3UNqnhZJhQ3aNrOP+6wBhm2B0ZjrxyMS1EWeD9tcNkdYluO00RlieAEV4zwoAfeFPSB21iXW5dU8idhNuTLptDc7SJoErxN+44jvcrSe/ZhpQohG6WfyDPH0BE1tyzsiD29PAWKkrfhg5kzjTAP/qFp+ByazeltP9/F0NXI5AHbE0pKYr56XUlwDfDZOTU9b1YeS7kKyPvccvC2j9NjGGM7NjafdFLHUTYBZiNUTZXVstddYtTCVbTqI7I/x6hoeeNVDZv7XluwZLrYsDNsNrWU3c9VijPK1CE5Owy+gJoGgxEHfA/n9Jvc3lEesqKBpW92RazkpHW2eD9wh8Ayv3q6PNDGzWyiXA8YWW6yD/dIp2Oh8szZUfOXy8sQ8VW86T6RsqGP5CKKPGW1NnP/KTKe5/WoBLZQ=", "gossipEndpoint": [{ "ipAddressV4": "IkSEKg==", "port": 30128 }, { "ipAddressV4": "CoAAWw==", "port": 30128 }] }] } | |||||||||
| node4 | 9.688s | 2025-12-23 14:21:01.198 | 36 | INFO | STARTUP | <main> | TransactionHandlingHistory: | Consistency testing tool log path: data/saved/consistency-test/4/ConsistencyTestLog.csv | |
| node4 | 9.689s | 2025-12-23 14:21:01.199 | 37 | INFO | STARTUP | <main> | TransactionHandlingHistory: | No log file found. Starting without any previous history | |
| node4 | 9.711s | 2025-12-23 14:21:01.221 | 38 | INFO | STARTUP | <main> | StateInitializer: | The platform is using the following initial state: | |
| Round: 0 Timestamp: 1970-01-01T00:00:00Z Next consensus number: 0 Legacy running event hash: null Legacy running event mnemonic: null Rounds non-ancient: 0 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1 Root hash: 2e810ee8e8889a9749af2a6dd636308369065f2d81f872f03dcdfdfa9f248a6a2ef095a6aeb09246c380ec06c280924a (root) VirtualMap state / south-tackle-upper-pudding {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":2,"lastLeafPath":4},"Singletons":{"RosterService.ROSTER_STATE":{"path":2,"mnemonic":"clump-chimney-cable-explain"},"PlatformStateService.PLATFORM_STATE":{"path":3,"mnemonic":"normal-stage-book-frozen"}}} | |||||||||
| node4 | 9.717s | 2025-12-23 14:21:01.227 | 40 | INFO | RECONNECT | <<platform-core: reconnectController>> | ReconnectController: | Starting the ReconnectController | |
| node4 | 9.955s | 2025-12-23 14:21:01.465 | 41 | INFO | EVENT_STREAM | <main> | DefaultConsensusEventStream: | EventStreamManager::updateRunningHash: 38b060a751ac96384cd9327eb1b1e36a21fdb71114be07434c0cc7bf63f6e1da274edebfe76f65fbd51ad2f14898b95b | |
| node4 | 9.963s | 2025-12-23 14:21:01.473 | 42 | INFO | STARTUP | <platformForkJoinThread-2> | Shadowgraph: | Shadowgraph starting from expiration threshold 1 | |
| node4 | 9.969s | 2025-12-23 14:21:01.479 | 43 | INFO | STARTUP | <<start-node-4>> | ConsistencyTestingToolMain: | init called in Main for node 4. | |
| node4 | 9.970s | 2025-12-23 14:21:01.480 | 44 | INFO | STARTUP | <<start-node-4>> | SwirldsPlatform: | Starting platform 4 | |
| node4 | 9.973s | 2025-12-23 14:21:01.483 | 45 | INFO | STARTUP | <<platform: recycle-bin-cleanup>> | RecycleBinImpl: | Deleted 0 files from the recycle bin. | |
| node4 | 9.977s | 2025-12-23 14:21:01.487 | 46 | INFO | STARTUP | <<start-node-4>> | CycleFinder: | No cyclical back pressure detected in wiring model. | |
| node4 | 9.979s | 2025-12-23 14:21:01.489 | 47 | INFO | STARTUP | <<start-node-4>> | DirectSchedulerChecks: | No illegal direct scheduler use detected in the wiring model. | |
| node4 | 9.980s | 2025-12-23 14:21:01.490 | 48 | INFO | STARTUP | <<start-node-4>> | InputWireChecks: | All input wires have been bound. | |
| node4 | 9.983s | 2025-12-23 14:21:01.493 | 49 | WARN | STARTUP | <<start-node-4>> | PcesFileTracker: | No preconsensus event files available | |
| node4 | 9.984s | 2025-12-23 14:21:01.494 | 50 | INFO | STARTUP | <<start-node-4>> | SwirldsPlatform: | replaying preconsensus event stream starting at 0 | |
| node4 | 9.986s | 2025-12-23 14:21:01.496 | 51 | INFO | STARTUP | <<start-node-4>> | PcesReplayer: | Replayed 0 preconsensus events with max birth round -1. These events contained 0 transactions. 0 rounds reached consensus spanning 0.0 nanoseconds of consensus time. The latest round to reach consensus is round 0. Replay took 0.0 nanoseconds. | |
| node4 | 9.988s | 2025-12-23 14:21:01.498 | 52 | INFO | STARTUP | <<app: appMain 4>> | ConsistencyTestingToolMain: | run called in Main. | |
| node4 | 9.991s | 2025-12-23 14:21:01.501 | 53 | INFO | PLATFORM_STATUS | <platformForkJoinThread-4> | StatusStateMachine: | Platform spent 204.0 ms in STARTING_UP. Now in REPLAYING_EVENTS | |
| node4 | 9.998s | 2025-12-23 14:21:01.508 | 54 | INFO | PLATFORM_STATUS | <platformForkJoinThread-4> | StatusStateMachine: | Platform spent 5.0 ms in REPLAYING_EVENTS. Now in OBSERVING | |
| node1 | 10.777s | 2025-12-23 14:21:02.287 | 55 | INFO | STARTUP | <<platform-core: MetricsThread #0>> | LegacyCsvWriter: | CsvWriter: Initializing statistics output in CSV format [ csvOutputFolder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats', csvFileName = 'PlatformTesting1.csv' ] | |
| node1 | 10.781s | 2025-12-23 14:21:02.291 | 56 | DEBUG | STARTUP | <<platform-core: MetricsThread #0>> | LegacyCsvWriter: | CsvWriter: Using the existing metrics folder [ folder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats' ] | |
| node4 | 12.985s | 2025-12-23 14:21:04.495 | 55 | INFO | STARTUP | <<platform-core: MetricsThread #0>> | LegacyCsvWriter: | CsvWriter: Initializing statistics output in CSV format [ csvOutputFolder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats', csvFileName = 'PlatformTesting4.csv' ] | |
| node4 | 12.989s | 2025-12-23 14:21:04.499 | 56 | DEBUG | STARTUP | <<platform-core: MetricsThread #0>> | LegacyCsvWriter: | CsvWriter: Using the existing metrics folder [ folder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats' ] | |
| node3 | 14.808s | 2025-12-23 14:21:06.318 | 57 | INFO | PLATFORM_STATUS | <platformForkJoinThread-4> | StatusStateMachine: | Platform spent 10.1 s in OBSERVING. Now in CHECKING | |
| node0 | 16.197s | 2025-12-23 14:21:07.707 | 57 | INFO | PLATFORM_STATUS | <platformForkJoinThread-6> | StatusStateMachine: | Platform spent 10.1 s in OBSERVING. Now in CHECKING | |
| node2 | 16.206s | 2025-12-23 14:21:07.716 | 57 | INFO | PLATFORM_STATUS | <platformForkJoinThread-1> | StatusStateMachine: | Platform spent 10.1 s in OBSERVING. Now in CHECKING | |
| node0 | 17.278s | 2025-12-23 14:21:08.788 | 59 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 1 created, will eventually be written to disk, for reason: FIRST_ROUND_AFTER_GENESIS | |
| node2 | 17.313s | 2025-12-23 14:21:08.823 | 59 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 1 created, will eventually be written to disk, for reason: FIRST_ROUND_AFTER_GENESIS | |
| node3 | 17.313s | 2025-12-23 14:21:08.823 | 58 | INFO | PLATFORM_STATUS | <platformForkJoinThread-1> | StatusStateMachine: | Platform spent 2.5 s in CHECKING. Now in ACTIVE | |
| node3 | 17.315s | 2025-12-23 14:21:08.825 | 60 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 1 created, will eventually be written to disk, for reason: FIRST_ROUND_AFTER_GENESIS | |
| node4 | 17.365s | 2025-12-23 14:21:08.875 | 58 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 1 created, will eventually be written to disk, for reason: FIRST_ROUND_AFTER_GENESIS | |
| node1 | 17.507s | 2025-12-23 14:21:09.017 | 58 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 1 created, will eventually be written to disk, for reason: FIRST_ROUND_AFTER_GENESIS | |
| node1 | 17.602s | 2025-12-23 14:21:09.112 | 73 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 1 state to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/1 | |
| node1 | 17.604s | 2025-12-23 14:21:09.114 | 74 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for 1 | |
| node0 | 17.629s | 2025-12-23 14:21:09.139 | 74 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 1 state to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/1 | |
| node0 | 17.631s | 2025-12-23 14:21:09.141 | 75 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for 1 | |
| node2 | 17.663s | 2025-12-23 14:21:09.173 | 74 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 1 state to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/1 | |
| node2 | 17.664s | 2025-12-23 14:21:09.174 | 75 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for 1 | |
| node3 | 17.664s | 2025-12-23 14:21:09.174 | 75 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 1 state to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/1 | |
| node3 | 17.666s | 2025-12-23 14:21:09.176 | 76 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for 1 | |
| node4 | 17.778s | 2025-12-23 14:21:09.288 | 73 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 1 state to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/1 | |
| node4 | 17.780s | 2025-12-23 14:21:09.290 | 74 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for 1 | |
| node2 | 17.852s | 2025-12-23 14:21:09.362 | 97 | INFO | PLATFORM_STATUS | <platformForkJoinThread-2> | StatusStateMachine: | Platform spent 1.6 s in CHECKING. Now in ACTIVE | |
| node1 | 17.861s | 2025-12-23 14:21:09.371 | 106 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for 1 | |
| node1 | 17.864s | 2025-12-23 14:21:09.374 | 107 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 1 Timestamp: 2025-12-23T14:21:06.508680557Z Next consensus number: 1 Legacy running event hash: 389cb1b4f8d0a46ae567c2f0a4e8f7c92588307891004d51ef292da7b9ec03e9a25873bf1e67797c19cd8ad90c1eeb3c Legacy running event mnemonic: bean-sister-toward-index Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1450302654 Root hash: 45b310fe1386131b7310cde427620034b5a375d08e6ac20c458ab4039cf5774d6d613a86ec9efdac9fa8952012551b6b (root) VirtualMap state / reopen-cabin-ignore-gloom {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"vanish-alpha-injury-grocery"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"paddle-robust-token-stove"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"clump-chimney-cable-explain"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"trouble-clinic-behave-spike"}}} | |||||||||
| node1 | 17.872s | 2025-12-23 14:21:09.382 | 108 | INFO | PLATFORM_STATUS | <platformForkJoinThread-8> | StatusStateMachine: | Platform spent 10.1 s in OBSERVING. Now in CHECKING | |
| node0 | 17.875s | 2025-12-23 14:21:09.385 | 107 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for 1 | |
| node0 | 17.878s | 2025-12-23 14:21:09.388 | 108 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 1 Timestamp: 2025-12-23T14:21:06.508680557Z Next consensus number: 1 Legacy running event hash: 389cb1b4f8d0a46ae567c2f0a4e8f7c92588307891004d51ef292da7b9ec03e9a25873bf1e67797c19cd8ad90c1eeb3c Legacy running event mnemonic: bean-sister-toward-index Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1450302654 Root hash: 45b310fe1386131b7310cde427620034b5a375d08e6ac20c458ab4039cf5774d6d613a86ec9efdac9fa8952012551b6b (root) VirtualMap state / reopen-cabin-ignore-gloom {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"vanish-alpha-injury-grocery"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"paddle-robust-token-stove"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"clump-chimney-cable-explain"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"trouble-clinic-behave-spike"}}} | |||||||||
| node0 | 17.884s | 2025-12-23 14:21:09.394 | 110 | INFO | PLATFORM_STATUS | <platformForkJoinThread-3> | StatusStateMachine: | Platform spent 1.7 s in CHECKING. Now in ACTIVE | |
| node3 | 17.891s | 2025-12-23 14:21:09.401 | 110 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for 1 | |
| node3 | 17.894s | 2025-12-23 14:21:09.404 | 111 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 1 Timestamp: 2025-12-23T14:21:06.508680557Z Next consensus number: 1 Legacy running event hash: 389cb1b4f8d0a46ae567c2f0a4e8f7c92588307891004d51ef292da7b9ec03e9a25873bf1e67797c19cd8ad90c1eeb3c Legacy running event mnemonic: bean-sister-toward-index Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1450302654 Root hash: 45b310fe1386131b7310cde427620034b5a375d08e6ac20c458ab4039cf5774d6d613a86ec9efdac9fa8952012551b6b (root) VirtualMap state / reopen-cabin-ignore-gloom {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"vanish-alpha-injury-grocery"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"paddle-robust-token-stove"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"clump-chimney-cable-explain"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"trouble-clinic-behave-spike"}}} | |||||||||
| node1 | 17.910s | 2025-12-23 14:21:09.420 | 109 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/1/2025/12/23/2025-12-23T14+21+06.465098814Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node1 | 17.911s | 2025-12-23 14:21:09.421 | 110 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 1 File: data/saved/preconsensus-events/1/2025/12/23/2025-12-23T14+21+06.465098814Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node1 | 17.911s | 2025-12-23 14:21:09.421 | 111 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node1 | 17.912s | 2025-12-23 14:21:09.422 | 112 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node0 | 17.919s | 2025-12-23 14:21:09.429 | 112 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/0/2025/12/23/2025-12-23T14+21+06.555430570Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node1 | 17.919s | 2025-12-23 14:21:09.429 | 113 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 1 to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/1 {"round":1,"freezeState":false,"reason":"FIRST_ROUND_AFTER_GENESIS","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/1/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node0 | 17.920s | 2025-12-23 14:21:09.430 | 113 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 1 File: data/saved/preconsensus-events/0/2025/12/23/2025-12-23T14+21+06.555430570Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node0 | 17.920s | 2025-12-23 14:21:09.430 | 114 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node2 | 17.920s | 2025-12-23 14:21:09.430 | 111 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for 1 | |
| node0 | 17.921s | 2025-12-23 14:21:09.431 | 115 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node2 | 17.923s | 2025-12-23 14:21:09.433 | 112 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 1 Timestamp: 2025-12-23T14:21:06.508680557Z Next consensus number: 1 Legacy running event hash: 389cb1b4f8d0a46ae567c2f0a4e8f7c92588307891004d51ef292da7b9ec03e9a25873bf1e67797c19cd8ad90c1eeb3c Legacy running event mnemonic: bean-sister-toward-index Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1450302654 Root hash: 45b310fe1386131b7310cde427620034b5a375d08e6ac20c458ab4039cf5774d6d613a86ec9efdac9fa8952012551b6b (root) VirtualMap state / reopen-cabin-ignore-gloom {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"vanish-alpha-injury-grocery"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"paddle-robust-token-stove"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"clump-chimney-cable-explain"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"trouble-clinic-behave-spike"}}} | |||||||||
| node0 | 17.928s | 2025-12-23 14:21:09.438 | 116 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 1 to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/1 {"round":1,"freezeState":false,"reason":"FIRST_ROUND_AFTER_GENESIS","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/1/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node3 | 17.931s | 2025-12-23 14:21:09.441 | 112 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/3/2025/12/23/2025-12-23T14+21+06.353176078Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node3 | 17.932s | 2025-12-23 14:21:09.442 | 113 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 1 File: data/saved/preconsensus-events/3/2025/12/23/2025-12-23T14+21+06.353176078Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node3 | 17.932s | 2025-12-23 14:21:09.442 | 114 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node3 | 17.933s | 2025-12-23 14:21:09.443 | 115 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node3 | 17.938s | 2025-12-23 14:21:09.448 | 116 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 1 to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/1 {"round":1,"freezeState":false,"reason":"FIRST_ROUND_AFTER_GENESIS","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/1/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node2 | 17.963s | 2025-12-23 14:21:09.473 | 113 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/2/2025/12/23/2025-12-23T14+21+06.515558937Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node2 | 17.964s | 2025-12-23 14:21:09.474 | 114 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 1 File: data/saved/preconsensus-events/2/2025/12/23/2025-12-23T14+21+06.515558937Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node2 | 17.964s | 2025-12-23 14:21:09.474 | 115 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node2 | 17.965s | 2025-12-23 14:21:09.475 | 116 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node2 | 17.972s | 2025-12-23 14:21:09.482 | 117 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 1 to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/1 {"round":1,"freezeState":false,"reason":"FIRST_ROUND_AFTER_GENESIS","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/1/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node4 | 18.040s | 2025-12-23 14:21:09.550 | 107 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/1 for 1 | |
| node4 | 18.044s | 2025-12-23 14:21:09.554 | 108 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 1 Timestamp: 2025-12-23T14:21:06.508680557Z Next consensus number: 1 Legacy running event hash: 389cb1b4f8d0a46ae567c2f0a4e8f7c92588307891004d51ef292da7b9ec03e9a25873bf1e67797c19cd8ad90c1eeb3c Legacy running event mnemonic: bean-sister-toward-index Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1450302654 Root hash: 45b310fe1386131b7310cde427620034b5a375d08e6ac20c458ab4039cf5774d6d613a86ec9efdac9fa8952012551b6b (root) VirtualMap state / reopen-cabin-ignore-gloom {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"vanish-alpha-injury-grocery"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"paddle-robust-token-stove"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"clump-chimney-cable-explain"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"trouble-clinic-behave-spike"}}} | |||||||||
| node4 | 18.088s | 2025-12-23 14:21:09.598 | 109 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/4/2025/12/23/2025-12-23T14+21+06.468510091Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node4 | 18.088s | 2025-12-23 14:21:09.598 | 110 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 1 File: data/saved/preconsensus-events/4/2025/12/23/2025-12-23T14+21+06.468510091Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node4 | 18.089s | 2025-12-23 14:21:09.599 | 111 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node4 | 18.091s | 2025-12-23 14:21:09.601 | 112 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node4 | 18.098s | 2025-12-23 14:21:09.608 | 113 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 1 to disk. Reason: FIRST_ROUND_AFTER_GENESIS, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/1 {"round":1,"freezeState":false,"reason":"FIRST_ROUND_AFTER_GENESIS","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/1/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node1 | 19.129s | 2025-12-23 14:21:10.639 | 147 | INFO | PLATFORM_STATUS | <platformForkJoinThread-4> | StatusStateMachine: | Platform spent 1.3 s in CHECKING. Now in ACTIVE | |
| node4 | 20.082s | 2025-12-23 14:21:11.592 | 168 | INFO | PLATFORM_STATUS | <platformForkJoinThread-6> | StatusStateMachine: | Platform spent 10.1 s in OBSERVING. Now in CHECKING | |
| node4 | 21.791s | 2025-12-23 14:21:13.301 | 212 | INFO | PLATFORM_STATUS | <platformForkJoinThread-5> | StatusStateMachine: | Platform spent 1.7 s in CHECKING. Now in ACTIVE | |
| node0 | 1m 9.572s | 2025-12-23 14:22:01.082 | 1410 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 119 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node4 | 1m 9.587s | 2025-12-23 14:22:01.097 | 1399 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 119 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node1 | 1m 9.597s | 2025-12-23 14:22:01.107 | 1402 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 119 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node2 | 1m 9.600s | 2025-12-23 14:22:01.110 | 1377 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 119 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node3 | 1m 9.652s | 2025-12-23 14:22:01.162 | 1384 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 119 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node2 | 1m 9.842s | 2025-12-23 14:22:01.352 | 1380 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 119 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/119 | |
| node2 | 1m 9.843s | 2025-12-23 14:22:01.353 | 1381 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for 119 | |
| node3 | 1m 9.876s | 2025-12-23 14:22:01.386 | 1387 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 119 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/119 | |
| node3 | 1m 9.876s | 2025-12-23 14:22:01.386 | 1388 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for 119 | |
| node1 | 1m 9.913s | 2025-12-23 14:22:01.423 | 1405 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 119 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/119 | |
| node1 | 1m 9.914s | 2025-12-23 14:22:01.424 | 1406 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for 119 | |
| node2 | 1m 9.929s | 2025-12-23 14:22:01.439 | 1420 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for 119 | |
| node2 | 1m 9.931s | 2025-12-23 14:22:01.441 | 1421 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 119 Timestamp: 2025-12-23T14:22:00.084811Z Next consensus number: 4037 Legacy running event hash: 432a2824de825675f78f48cad515a76853db1a1a9a1e7d0b7d562762deca4b6ac132fcb62f3bab7a00f29c987ad35581 Legacy running event mnemonic: grab-afraid-cool-solid Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 401436803 Root hash: 5e9a29834a4f4fddb6e34b3e2d4042fb9deacfe8c7bf35c01de82b0d9fb0cefedf4f03344ce0509d473cc12c575afef8 (root) VirtualMap state / spy-corn-enemy-wing {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"derive-cart-betray-zoo"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"board-champion-remember-emotion"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"clump-chimney-cable-explain"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"scissors-dress-enjoy-exotic"}}} | |||||||||
| node2 | 1m 9.940s | 2025-12-23 14:22:01.450 | 1422 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/2/2025/12/23/2025-12-23T14+21+06.515558937Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node2 | 1m 9.940s | 2025-12-23 14:22:01.450 | 1423 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 92 File: data/saved/preconsensus-events/2/2025/12/23/2025-12-23T14+21+06.515558937Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node2 | 1m 9.941s | 2025-12-23 14:22:01.451 | 1424 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node2 | 1m 9.944s | 2025-12-23 14:22:01.454 | 1425 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node2 | 1m 9.945s | 2025-12-23 14:22:01.455 | 1426 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 119 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/119 {"round":119,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/119/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node4 | 1m 9.951s | 2025-12-23 14:22:01.461 | 1402 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 119 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/119 | |
| node4 | 1m 9.952s | 2025-12-23 14:22:01.462 | 1403 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for 119 | |
| node3 | 1m 9.958s | 2025-12-23 14:22:01.468 | 1427 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for 119 | |
| node3 | 1m 9.961s | 2025-12-23 14:22:01.471 | 1428 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 119 Timestamp: 2025-12-23T14:22:00.084811Z Next consensus number: 4037 Legacy running event hash: 432a2824de825675f78f48cad515a76853db1a1a9a1e7d0b7d562762deca4b6ac132fcb62f3bab7a00f29c987ad35581 Legacy running event mnemonic: grab-afraid-cool-solid Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 401436803 Root hash: 5e9a29834a4f4fddb6e34b3e2d4042fb9deacfe8c7bf35c01de82b0d9fb0cefedf4f03344ce0509d473cc12c575afef8 (root) VirtualMap state / spy-corn-enemy-wing {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"derive-cart-betray-zoo"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"board-champion-remember-emotion"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"clump-chimney-cable-explain"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"scissors-dress-enjoy-exotic"}}} | |||||||||
| node3 | 1m 9.969s | 2025-12-23 14:22:01.479 | 1429 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/3/2025/12/23/2025-12-23T14+21+06.353176078Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node3 | 1m 9.969s | 2025-12-23 14:22:01.479 | 1430 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 92 File: data/saved/preconsensus-events/3/2025/12/23/2025-12-23T14+21+06.353176078Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node3 | 1m 9.970s | 2025-12-23 14:22:01.480 | 1431 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node3 | 1m 9.973s | 2025-12-23 14:22:01.483 | 1432 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node3 | 1m 9.974s | 2025-12-23 14:22:01.484 | 1433 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 119 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/119 {"round":119,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/119/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node0 | 1m 9.994s | 2025-12-23 14:22:01.504 | 1413 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 119 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/119 | |
| node0 | 1m 9.995s | 2025-12-23 14:22:01.505 | 1414 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for 119 | |
| node1 | 1m 9.998s | 2025-12-23 14:22:01.508 | 1445 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for 119 | |
| node1 | 1m 10.001s | 2025-12-23 14:22:01.511 | 1446 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 119 Timestamp: 2025-12-23T14:22:00.084811Z Next consensus number: 4037 Legacy running event hash: 432a2824de825675f78f48cad515a76853db1a1a9a1e7d0b7d562762deca4b6ac132fcb62f3bab7a00f29c987ad35581 Legacy running event mnemonic: grab-afraid-cool-solid Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 401436803 Root hash: 5e9a29834a4f4fddb6e34b3e2d4042fb9deacfe8c7bf35c01de82b0d9fb0cefedf4f03344ce0509d473cc12c575afef8 (root) VirtualMap state / spy-corn-enemy-wing {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"derive-cart-betray-zoo"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"board-champion-remember-emotion"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"clump-chimney-cable-explain"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"scissors-dress-enjoy-exotic"}}} | |||||||||
| node1 | 1m 10.009s | 2025-12-23 14:22:01.519 | 1447 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/1/2025/12/23/2025-12-23T14+21+06.465098814Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node1 | 1m 10.010s | 2025-12-23 14:22:01.520 | 1448 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 92 File: data/saved/preconsensus-events/1/2025/12/23/2025-12-23T14+21+06.465098814Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node1 | 1m 10.010s | 2025-12-23 14:22:01.520 | 1449 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node1 | 1m 10.013s | 2025-12-23 14:22:01.523 | 1450 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node1 | 1m 10.014s | 2025-12-23 14:22:01.524 | 1451 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 119 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/119 {"round":119,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/119/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node4 | 1m 10.046s | 2025-12-23 14:22:01.556 | 1442 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for 119 | |
| node4 | 1m 10.049s | 2025-12-23 14:22:01.559 | 1443 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 119 Timestamp: 2025-12-23T14:22:00.084811Z Next consensus number: 4037 Legacy running event hash: 432a2824de825675f78f48cad515a76853db1a1a9a1e7d0b7d562762deca4b6ac132fcb62f3bab7a00f29c987ad35581 Legacy running event mnemonic: grab-afraid-cool-solid Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 401436803 Root hash: 5e9a29834a4f4fddb6e34b3e2d4042fb9deacfe8c7bf35c01de82b0d9fb0cefedf4f03344ce0509d473cc12c575afef8 (root) VirtualMap state / spy-corn-enemy-wing {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"derive-cart-betray-zoo"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"board-champion-remember-emotion"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"clump-chimney-cable-explain"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"scissors-dress-enjoy-exotic"}}} | |||||||||
| node4 | 1m 10.059s | 2025-12-23 14:22:01.569 | 1444 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/4/2025/12/23/2025-12-23T14+21+06.468510091Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node4 | 1m 10.060s | 2025-12-23 14:22:01.570 | 1445 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 92 File: data/saved/preconsensus-events/4/2025/12/23/2025-12-23T14+21+06.468510091Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node4 | 1m 10.061s | 2025-12-23 14:22:01.571 | 1446 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node4 | 1m 10.064s | 2025-12-23 14:22:01.574 | 1447 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node4 | 1m 10.065s | 2025-12-23 14:22:01.575 | 1448 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 119 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/119 {"round":119,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/119/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node0 | 1m 10.075s | 2025-12-23 14:22:01.585 | 1445 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/8 for 119 | |
| node0 | 1m 10.077s | 2025-12-23 14:22:01.587 | 1446 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 119 Timestamp: 2025-12-23T14:22:00.084811Z Next consensus number: 4037 Legacy running event hash: 432a2824de825675f78f48cad515a76853db1a1a9a1e7d0b7d562762deca4b6ac132fcb62f3bab7a00f29c987ad35581 Legacy running event mnemonic: grab-afraid-cool-solid Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 401436803 Root hash: 5e9a29834a4f4fddb6e34b3e2d4042fb9deacfe8c7bf35c01de82b0d9fb0cefedf4f03344ce0509d473cc12c575afef8 (root) VirtualMap state / spy-corn-enemy-wing {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"derive-cart-betray-zoo"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"board-champion-remember-emotion"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"clump-chimney-cable-explain"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"scissors-dress-enjoy-exotic"}}} | |||||||||
| node0 | 1m 10.085s | 2025-12-23 14:22:01.595 | 1447 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/0/2025/12/23/2025-12-23T14+21+06.555430570Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node0 | 1m 10.085s | 2025-12-23 14:22:01.595 | 1448 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 92 File: data/saved/preconsensus-events/0/2025/12/23/2025-12-23T14+21+06.555430570Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node0 | 1m 10.086s | 2025-12-23 14:22:01.596 | 1449 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node0 | 1m 10.089s | 2025-12-23 14:22:01.599 | 1450 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node0 | 1m 10.090s | 2025-12-23 14:22:01.600 | 1451 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 119 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/119 {"round":119,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/119/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node2 | 2m 9.430s | 2025-12-23 14:23:00.940 | 2841 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 249 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node1 | 2m 9.446s | 2025-12-23 14:23:00.956 | 2864 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 249 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node3 | 2m 9.452s | 2025-12-23 14:23:00.962 | 2844 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 249 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node4 | 2m 9.524s | 2025-12-23 14:23:01.034 | 2847 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 249 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node0 | 2m 9.547s | 2025-12-23 14:23:01.057 | 2880 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 249 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node1 | 2m 9.624s | 2025-12-23 14:23:01.134 | 2867 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 249 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/249 | |
| node1 | 2m 9.626s | 2025-12-23 14:23:01.136 | 2868 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for 249 | |
| node3 | 2m 9.694s | 2025-12-23 14:23:01.204 | 2847 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 249 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/249 | |
| node3 | 2m 9.695s | 2025-12-23 14:23:01.205 | 2848 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for 249 | |
| node4 | 2m 9.698s | 2025-12-23 14:23:01.208 | 2850 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 249 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/249 | |
| node4 | 2m 9.699s | 2025-12-23 14:23:01.209 | 2851 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for 249 | |
| node0 | 2m 9.726s | 2025-12-23 14:23:01.236 | 2883 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 249 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/249 | |
| node0 | 2m 9.727s | 2025-12-23 14:23:01.237 | 2884 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for 249 | |
| node1 | 2m 9.729s | 2025-12-23 14:23:01.239 | 2899 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for 249 | |
| node1 | 2m 9.731s | 2025-12-23 14:23:01.241 | 2900 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 249 Timestamp: 2025-12-23T14:23:00.001294294Z Next consensus number: 8853 Legacy running event hash: 2f443d8175357beb3cc60088bff2294b623be0a584d2df0234ee4b33bdc51a23c2b3bc2914ffe08d4d19c6b2cf4f11bf Legacy running event mnemonic: trigger-rabbit-pluck-latin Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 29905041 Root hash: 4a59f27efe69ed7267dffcba2bf8acdb5fd7076dcb958aa21dada7954902b5f358ebc78c1c34db0eef925b70d81f3030 (root) VirtualMap state / enroll-raccoon-cream-fade {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"captain-rather-mobile-duty"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"top-income-rubber-ordinary"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"clump-chimney-cable-explain"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"play-kit-pond-hazard"}}} | |||||||||
| node1 | 2m 9.739s | 2025-12-23 14:23:01.249 | 2901 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/1/2025/12/23/2025-12-23T14+21+06.465098814Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node1 | 2m 9.739s | 2025-12-23 14:23:01.249 | 2902 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 221 File: data/saved/preconsensus-events/1/2025/12/23/2025-12-23T14+21+06.465098814Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node1 | 2m 9.739s | 2025-12-23 14:23:01.249 | 2903 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node1 | 2m 9.746s | 2025-12-23 14:23:01.256 | 2904 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node1 | 2m 9.746s | 2025-12-23 14:23:01.256 | 2905 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 249 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/249 {"round":249,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/249/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node2 | 2m 9.763s | 2025-12-23 14:23:01.273 | 2844 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 249 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/249 | |
| node2 | 2m 9.764s | 2025-12-23 14:23:01.274 | 2845 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for 249 | |
| node3 | 2m 9.783s | 2025-12-23 14:23:01.293 | 2887 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for 249 | |
| node3 | 2m 9.785s | 2025-12-23 14:23:01.295 | 2888 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 249 Timestamp: 2025-12-23T14:23:00.001294294Z Next consensus number: 8853 Legacy running event hash: 2f443d8175357beb3cc60088bff2294b623be0a584d2df0234ee4b33bdc51a23c2b3bc2914ffe08d4d19c6b2cf4f11bf Legacy running event mnemonic: trigger-rabbit-pluck-latin Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 29905041 Root hash: 4a59f27efe69ed7267dffcba2bf8acdb5fd7076dcb958aa21dada7954902b5f358ebc78c1c34db0eef925b70d81f3030 (root) VirtualMap state / enroll-raccoon-cream-fade {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"captain-rather-mobile-duty"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"top-income-rubber-ordinary"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"clump-chimney-cable-explain"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"play-kit-pond-hazard"}}} | |||||||||
| node3 | 2m 9.793s | 2025-12-23 14:23:01.303 | 2889 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/3/2025/12/23/2025-12-23T14+21+06.353176078Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node3 | 2m 9.793s | 2025-12-23 14:23:01.303 | 2890 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 221 File: data/saved/preconsensus-events/3/2025/12/23/2025-12-23T14+21+06.353176078Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node3 | 2m 9.793s | 2025-12-23 14:23:01.303 | 2891 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node3 | 2m 9.799s | 2025-12-23 14:23:01.309 | 2892 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node3 | 2m 9.800s | 2025-12-23 14:23:01.310 | 2893 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 249 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/249 {"round":249,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/249/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node4 | 2m 9.800s | 2025-12-23 14:23:01.310 | 2890 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for 249 | |
| node4 | 2m 9.803s | 2025-12-23 14:23:01.313 | 2891 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 249 Timestamp: 2025-12-23T14:23:00.001294294Z Next consensus number: 8853 Legacy running event hash: 2f443d8175357beb3cc60088bff2294b623be0a584d2df0234ee4b33bdc51a23c2b3bc2914ffe08d4d19c6b2cf4f11bf Legacy running event mnemonic: trigger-rabbit-pluck-latin Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 29905041 Root hash: 4a59f27efe69ed7267dffcba2bf8acdb5fd7076dcb958aa21dada7954902b5f358ebc78c1c34db0eef925b70d81f3030 (root) VirtualMap state / enroll-raccoon-cream-fade {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"captain-rather-mobile-duty"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"top-income-rubber-ordinary"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"clump-chimney-cable-explain"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"play-kit-pond-hazard"}}} | |||||||||
| node0 | 2m 9.813s | 2025-12-23 14:23:01.323 | 2915 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for 249 | |
| node4 | 2m 9.813s | 2025-12-23 14:23:01.323 | 2892 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/4/2025/12/23/2025-12-23T14+21+06.468510091Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node4 | 2m 9.814s | 2025-12-23 14:23:01.324 | 2893 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 221 File: data/saved/preconsensus-events/4/2025/12/23/2025-12-23T14+21+06.468510091Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node4 | 2m 9.814s | 2025-12-23 14:23:01.324 | 2894 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node0 | 2m 9.815s | 2025-12-23 14:23:01.325 | 2916 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 249 Timestamp: 2025-12-23T14:23:00.001294294Z Next consensus number: 8853 Legacy running event hash: 2f443d8175357beb3cc60088bff2294b623be0a584d2df0234ee4b33bdc51a23c2b3bc2914ffe08d4d19c6b2cf4f11bf Legacy running event mnemonic: trigger-rabbit-pluck-latin Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 29905041 Root hash: 4a59f27efe69ed7267dffcba2bf8acdb5fd7076dcb958aa21dada7954902b5f358ebc78c1c34db0eef925b70d81f3030 (root) VirtualMap state / enroll-raccoon-cream-fade {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"captain-rather-mobile-duty"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"top-income-rubber-ordinary"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"clump-chimney-cable-explain"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"play-kit-pond-hazard"}}} | |||||||||
| node4 | 2m 9.821s | 2025-12-23 14:23:01.331 | 2895 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node4 | 2m 9.821s | 2025-12-23 14:23:01.331 | 2896 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 249 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/249 {"round":249,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/249/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node0 | 2m 9.822s | 2025-12-23 14:23:01.332 | 2917 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/0/2025/12/23/2025-12-23T14+21+06.555430570Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node0 | 2m 9.823s | 2025-12-23 14:23:01.333 | 2918 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 221 File: data/saved/preconsensus-events/0/2025/12/23/2025-12-23T14+21+06.555430570Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node0 | 2m 9.823s | 2025-12-23 14:23:01.333 | 2919 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node0 | 2m 9.829s | 2025-12-23 14:23:01.339 | 2920 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node0 | 2m 9.830s | 2025-12-23 14:23:01.340 | 2921 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 249 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/249 {"round":249,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/249/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node2 | 2m 9.847s | 2025-12-23 14:23:01.357 | 2876 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/15 for 249 | |
| node2 | 2m 9.849s | 2025-12-23 14:23:01.359 | 2877 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 249 Timestamp: 2025-12-23T14:23:00.001294294Z Next consensus number: 8853 Legacy running event hash: 2f443d8175357beb3cc60088bff2294b623be0a584d2df0234ee4b33bdc51a23c2b3bc2914ffe08d4d19c6b2cf4f11bf Legacy running event mnemonic: trigger-rabbit-pluck-latin Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 29905041 Root hash: 4a59f27efe69ed7267dffcba2bf8acdb5fd7076dcb958aa21dada7954902b5f358ebc78c1c34db0eef925b70d81f3030 (root) VirtualMap state / enroll-raccoon-cream-fade {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"captain-rather-mobile-duty"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"top-income-rubber-ordinary"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"clump-chimney-cable-explain"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"play-kit-pond-hazard"}}} | |||||||||
| node2 | 2m 9.859s | 2025-12-23 14:23:01.369 | 2878 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/2/2025/12/23/2025-12-23T14+21+06.515558937Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node2 | 2m 9.860s | 2025-12-23 14:23:01.370 | 2879 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 221 File: data/saved/preconsensus-events/2/2025/12/23/2025-12-23T14+21+06.515558937Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node2 | 2m 9.860s | 2025-12-23 14:23:01.370 | 2880 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node2 | 2m 9.867s | 2025-12-23 14:23:01.377 | 2881 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node2 | 2m 9.867s | 2025-12-23 14:23:01.377 | 2882 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 249 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/249 {"round":249,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/249/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node2 | 3m 9.542s | 2025-12-23 14:24:01.052 | 4343 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 384 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node0 | 3m 9.619s | 2025-12-23 14:24:01.129 | 4369 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 384 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node4 | 3m 9.627s | 2025-12-23 14:24:01.137 | 4330 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 384 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node3 | 3m 9.672s | 2025-12-23 14:24:01.182 | 4347 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 384 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node1 | 3m 9.751s | 2025-12-23 14:24:01.261 | 4389 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 384 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node2 | 3m 9.834s | 2025-12-23 14:24:01.344 | 4347 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 384 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/384 | |
| node2 | 3m 9.835s | 2025-12-23 14:24:01.345 | 4348 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for 384 | |
| node1 | 3m 9.840s | 2025-12-23 14:24:01.350 | 4392 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 384 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/384 | |
| node1 | 3m 9.841s | 2025-12-23 14:24:01.351 | 4393 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for 384 | |
| node3 | 3m 9.894s | 2025-12-23 14:24:01.404 | 4352 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 384 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/384 | |
| node3 | 3m 9.895s | 2025-12-23 14:24:01.405 | 4353 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for 384 | |
| node4 | 3m 9.914s | 2025-12-23 14:24:01.424 | 4335 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 384 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/384 | |
| node4 | 3m 9.915s | 2025-12-23 14:24:01.425 | 4336 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for 384 | |
| node2 | 3m 9.923s | 2025-12-23 14:24:01.433 | 4397 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for 384 | |
| node2 | 3m 9.925s | 2025-12-23 14:24:01.435 | 4398 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 384 Timestamp: 2025-12-23T14:24:00.075355Z Next consensus number: 13604 Legacy running event hash: 848326aa6269cd9d23c99f1e635a6472b89892924043a7cc6901065a42c853def1f1c8b573e53293e89e250d83b444c8 Legacy running event mnemonic: hurry-stem-essay-erase Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1872318141 Root hash: d91d7dcc542f3b00e72936a8794345efe7342b387377e6bc8712689af31c119f1160524c05b8d45ef2c59c26af852855 (root) VirtualMap state / symbol-ridge-magnet-gas {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"gallery-slim-shadow-goat"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"media-apple-lobster-harsh"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"clump-chimney-cable-explain"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"distance-expect-share-betray"}}} | |||||||||
| node1 | 3m 9.928s | 2025-12-23 14:24:01.438 | 4426 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for 384 | |
| node1 | 3m 9.930s | 2025-12-23 14:24:01.440 | 4427 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 384 Timestamp: 2025-12-23T14:24:00.075355Z Next consensus number: 13604 Legacy running event hash: 848326aa6269cd9d23c99f1e635a6472b89892924043a7cc6901065a42c853def1f1c8b573e53293e89e250d83b444c8 Legacy running event mnemonic: hurry-stem-essay-erase Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1872318141 Root hash: d91d7dcc542f3b00e72936a8794345efe7342b387377e6bc8712689af31c119f1160524c05b8d45ef2c59c26af852855 (root) VirtualMap state / symbol-ridge-magnet-gas {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"gallery-slim-shadow-goat"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"media-apple-lobster-harsh"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"clump-chimney-cable-explain"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"distance-expect-share-betray"}}} | |||||||||
| node2 | 3m 9.932s | 2025-12-23 14:24:01.442 | 4399 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/2/2025/12/23/2025-12-23T14+21+06.515558937Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node2 | 3m 9.933s | 2025-12-23 14:24:01.443 | 4400 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 354 File: data/saved/preconsensus-events/2/2025/12/23/2025-12-23T14+21+06.515558937Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node2 | 3m 9.933s | 2025-12-23 14:24:01.443 | 4401 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node1 | 3m 9.938s | 2025-12-23 14:24:01.448 | 4428 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/1/2025/12/23/2025-12-23T14+21+06.465098814Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node1 | 3m 9.938s | 2025-12-23 14:24:01.448 | 4429 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 354 File: data/saved/preconsensus-events/1/2025/12/23/2025-12-23T14+21+06.465098814Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node1 | 3m 9.938s | 2025-12-23 14:24:01.448 | 4430 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node2 | 3m 9.942s | 2025-12-23 14:24:01.452 | 4402 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node2 | 3m 9.943s | 2025-12-23 14:24:01.453 | 4403 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 384 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/384 {"round":384,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/384/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node1 | 3m 9.948s | 2025-12-23 14:24:01.458 | 4431 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node1 | 3m 9.949s | 2025-12-23 14:24:01.459 | 4432 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 384 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/384 {"round":384,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/384/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node0 | 3m 9.974s | 2025-12-23 14:24:01.484 | 4374 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 384 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/384 | |
| node0 | 3m 9.975s | 2025-12-23 14:24:01.485 | 4375 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for 384 | |
| node3 | 3m 9.979s | 2025-12-23 14:24:01.489 | 4402 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for 384 | |
| node3 | 3m 9.981s | 2025-12-23 14:24:01.491 | 4403 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 384 Timestamp: 2025-12-23T14:24:00.075355Z Next consensus number: 13604 Legacy running event hash: 848326aa6269cd9d23c99f1e635a6472b89892924043a7cc6901065a42c853def1f1c8b573e53293e89e250d83b444c8 Legacy running event mnemonic: hurry-stem-essay-erase Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1872318141 Root hash: d91d7dcc542f3b00e72936a8794345efe7342b387377e6bc8712689af31c119f1160524c05b8d45ef2c59c26af852855 (root) VirtualMap state / symbol-ridge-magnet-gas {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"gallery-slim-shadow-goat"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"media-apple-lobster-harsh"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"clump-chimney-cable-explain"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"distance-expect-share-betray"}}} | |||||||||
| node3 | 3m 9.989s | 2025-12-23 14:24:01.499 | 4404 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/3/2025/12/23/2025-12-23T14+21+06.353176078Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node3 | 3m 9.989s | 2025-12-23 14:24:01.499 | 4405 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 354 File: data/saved/preconsensus-events/3/2025/12/23/2025-12-23T14+21+06.353176078Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node3 | 3m 9.990s | 2025-12-23 14:24:01.500 | 4406 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node3 | 3m 9.999s | 2025-12-23 14:24:01.509 | 4407 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node3 | 3m 10.000s | 2025-12-23 14:24:01.510 | 4408 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 384 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/384 {"round":384,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/384/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node4 | 3m 10.008s | 2025-12-23 14:24:01.518 | 4385 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for 384 | |
| node4 | 3m 10.010s | 2025-12-23 14:24:01.520 | 4386 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 384 Timestamp: 2025-12-23T14:24:00.075355Z Next consensus number: 13604 Legacy running event hash: 848326aa6269cd9d23c99f1e635a6472b89892924043a7cc6901065a42c853def1f1c8b573e53293e89e250d83b444c8 Legacy running event mnemonic: hurry-stem-essay-erase Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1872318141 Root hash: d91d7dcc542f3b00e72936a8794345efe7342b387377e6bc8712689af31c119f1160524c05b8d45ef2c59c26af852855 (root) VirtualMap state / symbol-ridge-magnet-gas {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"gallery-slim-shadow-goat"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"media-apple-lobster-harsh"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"clump-chimney-cable-explain"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"distance-expect-share-betray"}}} | |||||||||
| node4 | 3m 10.019s | 2025-12-23 14:24:01.529 | 4387 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/4/2025/12/23/2025-12-23T14+21+06.468510091Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node4 | 3m 10.020s | 2025-12-23 14:24:01.530 | 4388 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 354 File: data/saved/preconsensus-events/4/2025/12/23/2025-12-23T14+21+06.468510091Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node4 | 3m 10.020s | 2025-12-23 14:24:01.530 | 4389 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node4 | 3m 10.030s | 2025-12-23 14:24:01.540 | 4390 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node4 | 3m 10.030s | 2025-12-23 14:24:01.540 | 4391 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 384 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/384 {"round":384,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/384/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node0 | 3m 10.056s | 2025-12-23 14:24:01.566 | 4408 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/22 for 384 | |
| node0 | 3m 10.058s | 2025-12-23 14:24:01.568 | 4409 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 384 Timestamp: 2025-12-23T14:24:00.075355Z Next consensus number: 13604 Legacy running event hash: 848326aa6269cd9d23c99f1e635a6472b89892924043a7cc6901065a42c853def1f1c8b573e53293e89e250d83b444c8 Legacy running event mnemonic: hurry-stem-essay-erase Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1872318141 Root hash: d91d7dcc542f3b00e72936a8794345efe7342b387377e6bc8712689af31c119f1160524c05b8d45ef2c59c26af852855 (root) VirtualMap state / symbol-ridge-magnet-gas {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"gallery-slim-shadow-goat"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"media-apple-lobster-harsh"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"clump-chimney-cable-explain"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"distance-expect-share-betray"}}} | |||||||||
| node0 | 3m 10.067s | 2025-12-23 14:24:01.577 | 4410 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/0/2025/12/23/2025-12-23T14+21+06.555430570Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node0 | 3m 10.068s | 2025-12-23 14:24:01.578 | 4411 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 354
File: data/saved/preconsensus-events/0/2025/12/23/2025-12-23T14+21+06.555430570Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node0 | 3m 10.068s | 2025-12-23 14:24:01.578 | 4412 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node0 | 3m 10.082s | 2025-12-23 14:24:01.592 | 4413 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node0 | 3m 10.082s | 2025-12-23 14:24:01.592 | 4414 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 384 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/384 {"round":384,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/384/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node1 | 3m 16.789s | 2025-12-23 14:24:08.299 | 4614 | WARN | SOCKET_EXCEPTIONS | <<platform-core: SyncProtocolWith4 1 to 4>> | NetworkUtils: | Connection broken: 1 -> 4 | |
| com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-12-23T14:24:08.298559925Z
    at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:160)
    at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.runProtocol(RpcPeerProtocol.java:293)
    at com.swirlds.platform.network.communication.states.ProtocolNegotiated.transition(ProtocolNegotiated.java:47)
    at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79)
    at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:65)
    at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599)
    at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200)
    at com.swirlds.common.threading.framework.internal.AbstractThreadConfiguration.lambda$wrapRunnableWithSnapshot$3(AbstractThreadConfiguration.java:654)
    at java.base/java.lang.Thread.run(Thread.java:1583)
    Suppressed: java.util.concurrent.ExecutionException: java.net.SocketException: Connection or outbound has closed
        at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
        at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
        at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154)
        ... 8 more
    Caused by: java.net.SocketException: Connection or outbound has closed
        at java.base/sun.security.ssl.SSLSocketImpl$AppOutputStream.write(SSLSocketImpl.java:1297)
        at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.write(AbstractStreamExtension.java:115)
        at com.swirlds.common.io.extendable.ExtendableOutputStream.write(ExtendableOutputStream.java:64)
        at java.base/java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:125)
        at java.base/java.io.BufferedOutputStream.implFlush(BufferedOutputStream.java:252)
        at java.base/java.io.BufferedOutputStream.flush(BufferedOutputStream.java:240)
        at java.base/java.io.DataOutputStream.flush(DataOutputStream.java:131)
        at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.writeMessages(RpcPeerProtocol.java:387)
        at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$2(RpcPeerProtocol.java:297)
        at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24)
        at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9)
        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
        ... 2 more
Caused by: java.util.concurrent.ExecutionException: java.net.SocketException: Connection reset
    at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
    at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
    at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154)
    ... 8 more
Caused by: java.net.SocketException: Connection reset
    at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318)
    at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346)
    at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796)
    at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099)
    at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489)
    at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483)
    at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70)
    at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461)
    at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066)
    at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73)
    at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63)
    at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291)
    at java.base/java.io.BufferedInputStream.read1(BufferedInputStream.java:347)
    at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:420)
    at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:399)
    at java.base/java.io.DataInputStream.readFully(DataInputStream.java:208)
    at java.base/java.io.DataInputStream.readShort(DataInputStream.java:319)
    at org.hiero.base.io.streams.AugmentedDataInputStream.readShort(AugmentedDataInputStream.java:158)
    at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.readMessages(RpcPeerProtocol.java:431)
    at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$1(RpcPeerProtocol.java:296)
    at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24)
    at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
    ... 2 more | |||||||||
| node2 | 3m 16.789s | 2025-12-23 14:24:08.299 | 4563 | WARN | SOCKET_EXCEPTIONS | <<platform-core: SyncProtocolWith4 2 to 4>> | NetworkUtils: | Connection broken: 2 -> 4 | |
| com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-12-23T14:24:08.298295626Z
    at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:160)
    at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.runProtocol(RpcPeerProtocol.java:293)
    at com.swirlds.platform.network.communication.states.ProtocolNegotiated.transition(ProtocolNegotiated.java:47)
    at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79)
    at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:65)
    at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599)
    at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200)
    at com.swirlds.common.threading.framework.internal.AbstractThreadConfiguration.lambda$wrapRunnableWithSnapshot$3(AbstractThreadConfiguration.java:654)
    at java.base/java.lang.Thread.run(Thread.java:1583)
    Suppressed: java.util.concurrent.ExecutionException: java.net.SocketException: Connection or outbound has closed
        at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
        at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
        at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154)
        ... 8 more
    Caused by: java.net.SocketException: Connection or outbound has closed
        at java.base/sun.security.ssl.SSLSocketImpl$AppOutputStream.write(SSLSocketImpl.java:1297)
        at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.write(AbstractStreamExtension.java:115)
        at com.swirlds.common.io.extendable.ExtendableOutputStream.write(ExtendableOutputStream.java:64)
        at java.base/java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:125)
        at java.base/java.io.BufferedOutputStream.implFlush(BufferedOutputStream.java:252)
        at java.base/java.io.BufferedOutputStream.flush(BufferedOutputStream.java:240)
        at java.base/java.io.DataOutputStream.flush(DataOutputStream.java:131)
        at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.writeMessages(RpcPeerProtocol.java:387)
        at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$2(RpcPeerProtocol.java:297)
        at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24)
        at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9)
        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
        ... 2 more
Caused by: java.util.concurrent.ExecutionException: java.net.SocketException: Connection reset
    at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
    at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
    at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154)
    ... 8 more
Caused by: java.net.SocketException: Connection reset
    at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318)
    at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346)
    at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796)
    at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099)
    at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489)
    at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483)
    at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70)
    at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461)
    at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066)
    at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73)
    at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63)
    at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291)
    at java.base/java.io.BufferedInputStream.read1(BufferedInputStream.java:347)
    at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:420)
    at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:399)
    at java.base/java.io.DataInputStream.readFully(DataInputStream.java:208)
    at java.base/java.io.DataInputStream.readShort(DataInputStream.java:319)
    at org.hiero.base.io.streams.AugmentedDataInputStream.readShort(AugmentedDataInputStream.java:158)
    at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.readMessages(RpcPeerProtocol.java:431)
    at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$1(RpcPeerProtocol.java:296)
    at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24)
    at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
    ... 2 more | |||||||||
| node0 | 3m 16.791s | 2025-12-23 14:24:08.301 | 4592 | WARN | SOCKET_EXCEPTIONS | <<platform-core: SyncProtocolWith4 0 to 4>> | NetworkUtils: | Connection broken: 0 -> 4 | |
| com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-12-23T14:24:08.297837668Z
    at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:160)
    at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.runProtocol(RpcPeerProtocol.java:293)
    at com.swirlds.platform.network.communication.states.ProtocolNegotiated.transition(ProtocolNegotiated.java:47)
    at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79)
    at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:65)
    at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599)
    at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200)
    at com.swirlds.common.threading.framework.internal.AbstractThreadConfiguration.lambda$wrapRunnableWithSnapshot$3(AbstractThreadConfiguration.java:654)
    at java.base/java.lang.Thread.run(Thread.java:1583)
    Suppressed: java.util.concurrent.ExecutionException: java.net.SocketException: Connection or outbound has closed
        at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
        at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
        at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154)
        ... 8 more
    Caused by: java.net.SocketException: Connection or outbound has closed
        at java.base/sun.security.ssl.SSLSocketImpl$AppOutputStream.write(SSLSocketImpl.java:1297)
        at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.write(AbstractStreamExtension.java:115)
        at com.swirlds.common.io.extendable.ExtendableOutputStream.write(ExtendableOutputStream.java:64)
        at java.base/java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:125)
        at java.base/java.io.BufferedOutputStream.implFlush(BufferedOutputStream.java:252)
        at java.base/java.io.BufferedOutputStream.flush(BufferedOutputStream.java:240)
        at java.base/java.io.DataOutputStream.flush(DataOutputStream.java:131)
        at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.writeMessages(RpcPeerProtocol.java:387)
        at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$2(RpcPeerProtocol.java:297)
        at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24)
        at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9)
        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
        ... 2 more
Caused by: java.util.concurrent.ExecutionException: java.net.SocketException: Connection reset
    at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
    at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
    at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154)
    ... 8 more
Caused by: java.net.SocketException: Connection reset
    at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318)
    at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346)
    at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796)
    at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099)
    at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489)
    at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483)
    at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70)
    at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461)
    at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066)
    at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73)
    at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63)
    at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291)
    at java.base/java.io.BufferedInputStream.read1(BufferedInputStream.java:347)
    at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:420)
    at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:399)
    at java.base/java.io.DataInputStream.readFully(DataInputStream.java:208)
    at java.base/java.io.DataInputStream.readShort(DataInputStream.java:319)
    at org.hiero.base.io.streams.AugmentedDataInputStream.readShort(AugmentedDataInputStream.java:158)
    at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.readMessages(RpcPeerProtocol.java:431)
    at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$1(RpcPeerProtocol.java:296)
    at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24)
    at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
    ... 2 more | |||||||||
| node3 | 3m 16.792s | 2025-12-23 14:24:08.302 | 4566 | WARN | SOCKET_EXCEPTIONS | <<platform-core: SyncProtocolWith4 3 to 4>> | NetworkUtils: | Connection broken: 3 -> 4 | |
| com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-12-23T14:24:08.297969307Z
    at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:160)
    at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.runProtocol(RpcPeerProtocol.java:293)
    at com.swirlds.platform.network.communication.states.ProtocolNegotiated.transition(ProtocolNegotiated.java:47)
    at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79)
    at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:65)
    at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599)
    at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200)
    at com.swirlds.common.threading.framework.internal.AbstractThreadConfiguration.lambda$wrapRunnableWithSnapshot$3(AbstractThreadConfiguration.java:654)
    at java.base/java.lang.Thread.run(Thread.java:1583)
    Suppressed: java.util.concurrent.ExecutionException: java.net.SocketException: Connection or outbound has closed
        at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
        at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
        at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154)
        ... 8 more
    Caused by: java.net.SocketException: Connection or outbound has closed
        at java.base/sun.security.ssl.SSLSocketImpl$AppOutputStream.write(SSLSocketImpl.java:1297)
        at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.write(AbstractStreamExtension.java:115)
        at com.swirlds.common.io.extendable.ExtendableOutputStream.write(ExtendableOutputStream.java:64)
        at java.base/java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:125)
        at java.base/java.io.BufferedOutputStream.implFlush(BufferedOutputStream.java:252)
        at java.base/java.io.BufferedOutputStream.flush(BufferedOutputStream.java:240)
        at java.base/java.io.DataOutputStream.flush(DataOutputStream.java:131)
        at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.writeMessages(RpcPeerProtocol.java:387)
        at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$2(RpcPeerProtocol.java:297)
        at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24)
        at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9)
        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
        ... 2 more
Caused by: java.util.concurrent.ExecutionException: java.net.SocketException: Connection reset
    at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
    at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
    at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154)
    ... 8 more
Caused by: java.net.SocketException: Connection reset
    at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318)
    at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346)
    at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796)
    at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099)
    at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489)
    at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483)
    at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70)
    at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461)
    at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066)
    at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73)
    at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63)
    at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291)
    at java.base/java.io.BufferedInputStream.read1(BufferedInputStream.java:347)
    at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:420)
    at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:399)
    at java.base/java.io.DataInputStream.readFully(DataInputStream.java:208)
    at java.base/java.io.DataInputStream.readShort(DataInputStream.java:319)
    at org.hiero.base.io.streams.AugmentedDataInputStream.readShort(AugmentedDataInputStream.java:158)
    at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.readMessages(RpcPeerProtocol.java:431)
    at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$1(RpcPeerProtocol.java:296)
    at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24)
    at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
    ... 2 more | |||||||||
| node1 | 4m 9.416s | 2025-12-23 14:25:00.926 | 5979 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 521 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node2 | 4m 9.440s | 2025-12-23 14:25:00.950 | 5898 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 521 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node0 | 4m 9.470s | 2025-12-23 14:25:00.980 | 5955 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 521 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node3 | 4m 9.471s | 2025-12-23 14:25:00.981 | 5901 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 521 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node2 | 4m 9.619s | 2025-12-23 14:25:01.129 | 5901 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 521 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/521 | |
| node2 | 4m 9.619s | 2025-12-23 14:25:01.129 | 5902 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/29 for 521 | |
| node3 | 4m 9.651s | 2025-12-23 14:25:01.161 | 5904 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 521 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/521 | |
| node3 | 4m 9.651s | 2025-12-23 14:25:01.161 | 5905 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/29 for 521 | |
| node2 | 4m 9.700s | 2025-12-23 14:25:01.210 | 5933 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/29 for 521 | |
| node2 | 4m 9.703s | 2025-12-23 14:25:01.213 | 5934 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 521
Timestamp: 2025-12-23T14:25:00.078463Z
Next consensus number: 17120
Legacy running event hash: b17045af5e987e34c789c44b1df44e06039aa2f6df4279fd7955bf7762b1e30323a105ad3ad6887a538d29f47ba81326
Legacy running event mnemonic: spirit-remove-spread-shoot
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1039521
Root hash: 83688d701c12b3ea4c4a236ce5771bdbb34b825907308b69acf28538e46728aaf3b91e453acf4ea7ebc0d74935613afb
(root) VirtualMap state / narrow-faith-liquid-lizard {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"salad-floor-write-liberty"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"tiger-laptop-oven-swing"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"clump-chimney-cable-explain"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"blouse-cover-parade-kite"}}} | |||||||||
| node2 | 4m 9.709s | 2025-12-23 14:25:01.219 | 5943 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 2 preconsensus files on disk. | |
| First file: data/saved/preconsensus-events/2/2025/12/23/2025-12-23T14+24+52.339344545Z_seq1_minr474_maxr5474_orgn0.pces
Last file: data/saved/preconsensus-events/2/2025/12/23/2025-12-23T14+21+06.515558937Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node2 | 4m 9.709s | 2025-12-23 14:25:01.219 | 5944 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 2 preconsensus event files meeting specified criteria to copy. | |
| Lower bound: 494
First file to copy: data/saved/preconsensus-events/2/2025/12/23/2025-12-23T14+21+06.515558937Z_seq0_minr1_maxr501_orgn0.pces
Last file to copy: data/saved/preconsensus-events/2/2025/12/23/2025-12-23T14+24+52.339344545Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node2 | 4m 9.709s | 2025-12-23 14:25:01.219 | 5945 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 2 preconsensus event file(s) | |
| node1 | 4m 9.716s | 2025-12-23 14:25:01.226 | 5982 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 521 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/521 | |
| node1 | 4m 9.717s | 2025-12-23 14:25:01.227 | 5983 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/29 for 521 | |
| node2 | 4m 9.721s | 2025-12-23 14:25:01.231 | 5946 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 2 preconsensus event file(s) | |
| node2 | 4m 9.722s | 2025-12-23 14:25:01.232 | 5947 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 521 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/521 {"round":521,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/521/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node3 | 4m 9.735s | 2025-12-23 14:25:01.245 | 5944 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/29 for 521 | |
| node3 | 4m 9.737s | 2025-12-23 14:25:01.247 | 5945 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 521
Timestamp: 2025-12-23T14:25:00.078463Z
Next consensus number: 17120
Legacy running event hash: b17045af5e987e34c789c44b1df44e06039aa2f6df4279fd7955bf7762b1e30323a105ad3ad6887a538d29f47ba81326
Legacy running event mnemonic: spirit-remove-spread-shoot
Rounds non-ancient: 26
Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=]
Minimum judge hash code: 1039521
Root hash: 83688d701c12b3ea4c4a236ce5771bdbb34b825907308b69acf28538e46728aaf3b91e453acf4ea7ebc0d74935613afb
(root) VirtualMap state / narrow-faith-liquid-lizard {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"salad-floor-write-liberty"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"tiger-laptop-oven-swing"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"clump-chimney-cable-explain"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"blouse-cover-parade-kite"}}} | |||||||||
| node3 | 4m 9.744s | 2025-12-23 14:25:01.254 | 5946 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 2 preconsensus files on disk. | |
| First file: data/saved/preconsensus-events/3/2025/12/23/2025-12-23T14+21+06.353176078Z_seq0_minr1_maxr501_orgn0.pces
Last file: data/saved/preconsensus-events/3/2025/12/23/2025-12-23T14+24+52.320371446Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node3 | 4m 9.744s | 2025-12-23 14:25:01.254 | 5947 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 2 preconsensus event files meeting specified criteria to copy. | |
| Lower bound: 494
First file to copy: data/saved/preconsensus-events/3/2025/12/23/2025-12-23T14+21+06.353176078Z_seq0_minr1_maxr501_orgn0.pces
Last file to copy: data/saved/preconsensus-events/3/2025/12/23/2025-12-23T14+24+52.320371446Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node3 | 4m 9.744s | 2025-12-23 14:25:01.254 | 5948 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 2 preconsensus event file(s) | |
| node3 | 4m 9.756s | 2025-12-23 14:25:01.266 | 5949 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 2 preconsensus event file(s) | |
| node3 | 4m 9.757s | 2025-12-23 14:25:01.267 | 5950 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 521 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/521 {"round":521,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/521/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node1 | 4m 9.806s | 2025-12-23 14:25:01.316 | 6022 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/29 for 521 | |
| node1 | 4m 9.808s | 2025-12-23 14:25:01.318 | 6023 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 521 Timestamp: 2025-12-23T14:25:00.078463Z Next consensus number: 17120 Legacy running event hash: b17045af5e987e34c789c44b1df44e06039aa2f6df4279fd7955bf7762b1e30323a105ad3ad6887a538d29f47ba81326 Legacy running event mnemonic: spirit-remove-spread-shoot Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1039521 Root hash: 83688d701c12b3ea4c4a236ce5771bdbb34b825907308b69acf28538e46728aaf3b91e453acf4ea7ebc0d74935613afb (root) VirtualMap state / narrow-faith-liquid-lizard {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"salad-floor-write-liberty"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"tiger-laptop-oven-swing"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"clump-chimney-cable-explain"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"blouse-cover-parade-kite"}}} | |||||||||
| node1 | 4m 9.818s | 2025-12-23 14:25:01.328 | 6024 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 2 preconsensus files on disk. | |
| First file: data/saved/preconsensus-events/1/2025/12/23/2025-12-23T14+24+52.305915291Z_seq1_minr474_maxr5474_orgn0.pces Last file: data/saved/preconsensus-events/1/2025/12/23/2025-12-23T14+21+06.465098814Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node1 | 4m 9.818s | 2025-12-23 14:25:01.328 | 6025 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 2 preconsensus event files meeting specified criteria to copy. | |
| Lower bound: 494 First file to copy: data/saved/preconsensus-events/1/2025/12/23/2025-12-23T14+21+06.465098814Z_seq0_minr1_maxr501_orgn0.pces Last file to copy: data/saved/preconsensus-events/1/2025/12/23/2025-12-23T14+24+52.305915291Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node1 | 4m 9.818s | 2025-12-23 14:25:01.328 | 6026 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 2 preconsensus event file(s) | |
| node1 | 4m 9.831s | 2025-12-23 14:25:01.341 | 6027 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 2 preconsensus event file(s) | |
| node1 | 4m 9.832s | 2025-12-23 14:25:01.342 | 6028 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 521 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/521 {"round":521,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/521/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node0 | 4m 9.836s | 2025-12-23 14:25:01.346 | 5968 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 521 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/521 | |
| node0 | 4m 9.837s | 2025-12-23 14:25:01.347 | 5969 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/29 for 521 | |
| node0 | 4m 9.915s | 2025-12-23 14:25:01.425 | 6011 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/29 for 521 | |
| node0 | 4m 9.917s | 2025-12-23 14:25:01.427 | 6012 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 521 Timestamp: 2025-12-23T14:25:00.078463Z Next consensus number: 17120 Legacy running event hash: b17045af5e987e34c789c44b1df44e06039aa2f6df4279fd7955bf7762b1e30323a105ad3ad6887a538d29f47ba81326 Legacy running event mnemonic: spirit-remove-spread-shoot Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1039521 Root hash: 83688d701c12b3ea4c4a236ce5771bdbb34b825907308b69acf28538e46728aaf3b91e453acf4ea7ebc0d74935613afb (root) VirtualMap state / narrow-faith-liquid-lizard {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"salad-floor-write-liberty"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"tiger-laptop-oven-swing"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"clump-chimney-cable-explain"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"blouse-cover-parade-kite"}}} | |||||||||
| node0 | 4m 9.924s | 2025-12-23 14:25:01.434 | 6013 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 2 preconsensus files on disk. | |
| First file: data/saved/preconsensus-events/0/2025/12/23/2025-12-23T14+24+52.381129600Z_seq1_minr474_maxr5474_orgn0.pces Last file: data/saved/preconsensus-events/0/2025/12/23/2025-12-23T14+21+06.555430570Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node0 | 4m 9.924s | 2025-12-23 14:25:01.434 | 6014 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 2 preconsensus event files meeting specified criteria to copy. | |
| Lower bound: 494 First file to copy: data/saved/preconsensus-events/0/2025/12/23/2025-12-23T14+21+06.555430570Z_seq0_minr1_maxr501_orgn0.pces Last file to copy: data/saved/preconsensus-events/0/2025/12/23/2025-12-23T14+24+52.381129600Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node0 | 4m 9.924s | 2025-12-23 14:25:01.434 | 6015 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 2 preconsensus event file(s) | |
| node0 | 4m 9.936s | 2025-12-23 14:25:01.446 | 6016 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 2 preconsensus event file(s) | |
| node0 | 4m 9.937s | 2025-12-23 14:25:01.447 | 6017 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 521 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/521 {"round":521,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/521/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node3 | 5m 9.541s | 2025-12-23 14:26:01.051 | 7508 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 660 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node2 | 5m 9.630s | 2025-12-23 14:26:01.140 | 7485 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 660 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node1 | 5m 9.636s | 2025-12-23 14:26:01.146 | 7678 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 660 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node0 | 5m 9.691s | 2025-12-23 14:26:01.201 | 7542 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 660 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node2 | 5m 9.814s | 2025-12-23 14:26:01.324 | 7488 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 660 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/660 | |
| node2 | 5m 9.815s | 2025-12-23 14:26:01.325 | 7489 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/36 for 660 | |
| node0 | 5m 9.844s | 2025-12-23 14:26:01.354 | 7545 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 660 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/660 | |
| node0 | 5m 9.844s | 2025-12-23 14:26:01.354 | 7546 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/36 for 660 | |
| node2 | 5m 9.894s | 2025-12-23 14:26:01.404 | 7520 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/36 for 660 | |
| node2 | 5m 9.896s | 2025-12-23 14:26:01.406 | 7521 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 660 Timestamp: 2025-12-23T14:26:00.263969364Z Next consensus number: 20440 Legacy running event hash: 3415313c4db3fa1e9fd0d265277464acddbd5968f22d9942e506817015f8194d3e4f49917c7e52f216954d394a6dfd52 Legacy running event mnemonic: live-chest-reopen-rough Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -437192173 Root hash: ca7915185980243a6fa6dc39311e73ac94787788aa9ca8b907376db4174798a5b22a21b20f07e072718f663245b8644a (root) VirtualMap state / rail-perfect-copy-mansion {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"fine-nuclear-now-possible"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"appear-reform-rookie-win"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"clump-chimney-cable-explain"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"cinnamon-lamp-primary-eye"}}} | |||||||||
| node3 | 5m 9.898s | 2025-12-23 14:26:01.408 | 7511 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 660 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/660 | |
| node3 | 5m 9.898s | 2025-12-23 14:26:01.408 | 7512 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/36 for 660 | |
| node2 | 5m 9.902s | 2025-12-23 14:26:01.412 | 7522 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 2 preconsensus files on disk. | |
| First file: data/saved/preconsensus-events/2/2025/12/23/2025-12-23T14+24+52.339344545Z_seq1_minr474_maxr5474_orgn0.pces Last file: data/saved/preconsensus-events/2/2025/12/23/2025-12-23T14+21+06.515558937Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node2 | 5m 9.902s | 2025-12-23 14:26:01.412 | 7523 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 633 File: data/saved/preconsensus-events/2/2025/12/23/2025-12-23T14+24+52.339344545Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node2 | 5m 9.903s | 2025-12-23 14:26:01.413 | 7524 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node2 | 5m 9.905s | 2025-12-23 14:26:01.415 | 7533 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node2 | 5m 9.906s | 2025-12-23 14:26:01.416 | 7534 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 660 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/660 {"round":660,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/660/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node2 | 5m 9.907s | 2025-12-23 14:26:01.417 | 7535 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/1 | |
| node0 | 5m 9.924s | 2025-12-23 14:26:01.434 | 7585 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/36 for 660 | |
| node0 | 5m 9.926s | 2025-12-23 14:26:01.436 | 7586 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 660 Timestamp: 2025-12-23T14:26:00.263969364Z Next consensus number: 20440 Legacy running event hash: 3415313c4db3fa1e9fd0d265277464acddbd5968f22d9942e506817015f8194d3e4f49917c7e52f216954d394a6dfd52 Legacy running event mnemonic: live-chest-reopen-rough Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -437192173 Root hash: ca7915185980243a6fa6dc39311e73ac94787788aa9ca8b907376db4174798a5b22a21b20f07e072718f663245b8644a (root) VirtualMap state / rail-perfect-copy-mansion {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"fine-nuclear-now-possible"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"appear-reform-rookie-win"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"clump-chimney-cable-explain"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"cinnamon-lamp-primary-eye"}}} | |||||||||
| node0 | 5m 9.934s | 2025-12-23 14:26:01.444 | 7587 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 2 preconsensus files on disk. | |
| First file: data/saved/preconsensus-events/0/2025/12/23/2025-12-23T14+24+52.381129600Z_seq1_minr474_maxr5474_orgn0.pces Last file: data/saved/preconsensus-events/0/2025/12/23/2025-12-23T14+21+06.555430570Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node0 | 5m 9.934s | 2025-12-23 14:26:01.444 | 7588 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 633 File: data/saved/preconsensus-events/0/2025/12/23/2025-12-23T14+24+52.381129600Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node0 | 5m 9.934s | 2025-12-23 14:26:01.444 | 7589 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node0 | 5m 9.937s | 2025-12-23 14:26:01.447 | 7590 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node0 | 5m 9.938s | 2025-12-23 14:26:01.448 | 7591 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 660 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/660 {"round":660,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/660/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node0 | 5m 9.940s | 2025-12-23 14:26:01.450 | 7592 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/1 | |
| node1 | 5m 9.944s | 2025-12-23 14:26:01.454 | 7681 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 660 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/660 | |
| node1 | 5m 9.945s | 2025-12-23 14:26:01.455 | 7682 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/36 for 660 | |
| node3 | 5m 9.987s | 2025-12-23 14:26:01.497 | 7554 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/36 for 660 | |
| node3 | 5m 9.989s | 2025-12-23 14:26:01.499 | 7555 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 660 Timestamp: 2025-12-23T14:26:00.263969364Z Next consensus number: 20440 Legacy running event hash: 3415313c4db3fa1e9fd0d265277464acddbd5968f22d9942e506817015f8194d3e4f49917c7e52f216954d394a6dfd52 Legacy running event mnemonic: live-chest-reopen-rough Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -437192173 Root hash: ca7915185980243a6fa6dc39311e73ac94787788aa9ca8b907376db4174798a5b22a21b20f07e072718f663245b8644a (root) VirtualMap state / rail-perfect-copy-mansion {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"fine-nuclear-now-possible"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"appear-reform-rookie-win"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"clump-chimney-cable-explain"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"cinnamon-lamp-primary-eye"}}} | |||||||||
| node3 | 5m 9.996s | 2025-12-23 14:26:01.506 | 7556 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 2 preconsensus files on disk. | |
| First file: data/saved/preconsensus-events/3/2025/12/23/2025-12-23T14+21+06.353176078Z_seq0_minr1_maxr501_orgn0.pces Last file: data/saved/preconsensus-events/3/2025/12/23/2025-12-23T14+24+52.320371446Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node3 | 5m 9.997s | 2025-12-23 14:26:01.507 | 7557 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 633 File: data/saved/preconsensus-events/3/2025/12/23/2025-12-23T14+24+52.320371446Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node3 | 5m 9.997s | 2025-12-23 14:26:01.507 | 7558 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node3 | 5m 10.000s | 2025-12-23 14:26:01.510 | 7559 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node3 | 5m 10.000s | 2025-12-23 14:26:01.510 | 7560 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 660 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/660 {"round":660,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/660/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node3 | 5m 10.002s | 2025-12-23 14:26:01.512 | 7561 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/1 | |
| node1 | 5m 10.034s | 2025-12-23 14:26:01.544 | 7721 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/36 for 660 | |
| node1 | 5m 10.036s | 2025-12-23 14:26:01.546 | 7722 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 660 Timestamp: 2025-12-23T14:26:00.263969364Z Next consensus number: 20440 Legacy running event hash: 3415313c4db3fa1e9fd0d265277464acddbd5968f22d9942e506817015f8194d3e4f49917c7e52f216954d394a6dfd52 Legacy running event mnemonic: live-chest-reopen-rough Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -437192173 Root hash: ca7915185980243a6fa6dc39311e73ac94787788aa9ca8b907376db4174798a5b22a21b20f07e072718f663245b8644a (root) VirtualMap state / rail-perfect-copy-mansion {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"fine-nuclear-now-possible"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"appear-reform-rookie-win"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"clump-chimney-cable-explain"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"cinnamon-lamp-primary-eye"}}} | |||||||||
| node1 | 5m 10.044s | 2025-12-23 14:26:01.554 | 7723 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 2 preconsensus files on disk. | |
| First file: data/saved/preconsensus-events/1/2025/12/23/2025-12-23T14+24+52.305915291Z_seq1_minr474_maxr5474_orgn0.pces Last file: data/saved/preconsensus-events/1/2025/12/23/2025-12-23T14+21+06.465098814Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node1 | 5m 10.044s | 2025-12-23 14:26:01.554 | 7724 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 633 File: data/saved/preconsensus-events/1/2025/12/23/2025-12-23T14+24+52.305915291Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node1 | 5m 10.044s | 2025-12-23 14:26:01.554 | 7725 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node1 | 5m 10.047s | 2025-12-23 14:26:01.557 | 7726 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node1 | 5m 10.048s | 2025-12-23 14:26:01.558 | 7727 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 660 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/660 {"round":660,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/660/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node1 | 5m 10.049s | 2025-12-23 14:26:01.559 | 7728 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/1 | |
| node4 | 6m 2.933s | 2025-12-23 14:26:54.443 | 1 | INFO | STARTUP | <main> | StaticPlatformBuilder: | ||
| ////////////////////// // Node is Starting // ////////////////////// | |||||||||
| node4 | 6m 3.041s | 2025-12-23 14:26:54.551 | 2 | DEBUG | STARTUP | <main> | StaticPlatformBuilder: | main() started {} [com.swirlds.logging.legacy.payload.NodeStartPayload] | |
| node4 | 6m 3.060s | 2025-12-23 14:26:54.570 | 3 | WARN | STARTUP | <main> | PlatformConfigUtils: | Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name. | |
| node4 | 6m 3.192s | 2025-12-23 14:26:54.702 | 4 | INFO | STARTUP | <main> | Browser: | The following nodes [4] are set to run locally | |
| node4 | 6m 3.226s | 2025-12-23 14:26:54.736 | 5 | DEBUG | STARTUP | <main> | BootstrapUtils: | Scanning the classpath for RuntimeConstructable classes | |
| node4 | 6m 5.022s | 2025-12-23 14:26:56.532 | 6 | DEBUG | STARTUP | <main> | BootstrapUtils: | Done with registerConstructables, time taken 1795ms | |
| node4 | 6m 5.034s | 2025-12-23 14:26:56.544 | 7 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | constructor called in Main. | |
| node4 | 6m 5.039s | 2025-12-23 14:26:56.549 | 8 | WARN | STARTUP | <main> | PlatformConfigUtils: | Configuration property 'reconnect.asyncOutputStreamFlushMilliseconds' was renamed to 'reconnect.asyncOutputStreamFlush'. This build is currently backwards compatible with the old name, but this may not be true in a future release, so it is important to switch to the new name. | |
| node4 | 6m 5.089s | 2025-12-23 14:26:56.599 | 9 | INFO | STARTUP | <main> | PrometheusEndpoint: | PrometheusEndpoint: Starting server listing on port: 9999 | |
| node4 | 6m 5.159s | 2025-12-23 14:26:56.669 | 10 | WARN | STARTUP | <main> | CryptoStatic: | There are no keys on disk, Adhoc keys will be generated, but this is incompatible with DAB. | |
| node4 | 6m 5.160s | 2025-12-23 14:26:56.670 | 11 | DEBUG | STARTUP | <main> | CryptoStatic: | Started generating keys | |
| node4 | 6m 6.058s | 2025-12-23 14:26:57.568 | 12 | DEBUG | STARTUP | <main> | CryptoStatic: | Done generating keys | |
| node4 | 6m 6.171s | 2025-12-23 14:26:57.681 | 15 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node4 | 6m 6.180s | 2025-12-23 14:26:57.690 | 16 | INFO | STARTUP | <main> | StartupStateUtils: | The following saved states were found on disk: | |
| - /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/384 - /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/249 - /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/119 - /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/1 | |||||||||
| node4 | 6m 6.181s | 2025-12-23 14:26:57.691 | 17 | INFO | STARTUP | <main> | StartupStateUtils: | Loading latest state from disk. | |
| node4 | 6m 6.182s | 2025-12-23 14:26:57.692 | 18 | INFO | STARTUP | <main> | StartupStateUtils: | Loading signed state from disk: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/384 | |
| node4 | 6m 6.192s | 2025-12-23 14:26:57.702 | 19 | INFO | STATE_TO_DISK | <main> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp | |
| node4 | 6m 6.330s | 2025-12-23 14:26:57.840 | 29 | INFO | STARTUP | <main> | ConsistencyTestingToolState: | New State Constructed. | |
| node4 | 6m 7.232s | 2025-12-23 14:26:58.742 | 31 | INFO | STARTUP | <main> | StartupStateUtils: | Loaded state's hash is the same as when it was saved. | |
| node4 | 6m 7.239s | 2025-12-23 14:26:58.749 | 32 | INFO | STARTUP | <main> | StartupStateUtils: | Platform has loaded a saved state {"round":384,"consensusTimestamp":"2025-12-23T14:24:00.075355Z"} [com.swirlds.logging.legacy.payload.SavedStateLoadedPayload] | |
| node4 | 6m 7.246s | 2025-12-23 14:26:58.756 | 35 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node4 | 6m 7.247s | 2025-12-23 14:26:58.757 | 38 | INFO | STARTUP | <main> | BootstrapUtils: | Not upgrading software, current software is version SemanticVersion[major=1, minor=0, patch=0, pre=, build=]. | |
| node4 | 6m 7.256s | 2025-12-23 14:26:58.766 | 39 | INFO | STARTUP | <main> | AddressBookInitializer: | Using the loaded state's address book and weight values. | |
| node4 | 6m 7.267s | 2025-12-23 14:26:58.777 | 40 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node4 | 6m 7.270s | 2025-12-23 14:26:58.780 | 41 | INFO | STARTUP | <main> | ConsistencyTestingToolMain: | returning software version SemanticVersion[major=1, minor=0, patch=0, pre=, build=] | |
| node4 | 6m 8.360s | 2025-12-23 14:26:59.870 | 42 | INFO | STARTUP | <main> | OSHealthChecker: | ||
| PASSED - Clock Source Speed Check Report[callsPerSec=26147183] PASSED - Entropy Check Report[success=true, entropySource=Strong Instance, elapsedNanos=278590, randomLong=-3045112065568860734, exception=null] PASSED - Entropy Check Report[success=true, entropySource=NativePRNGBlocking:SUN, elapsedNanos=36320, randomLong=-4898307746683801415, exception=null] PASSED - File System Check Report[code=SUCCESS, readNanos=1165088, data=35, exception=null] OS Health Check Report - Complete (took 1028 ms) | |||||||||
| node4 | 6m 8.397s | 2025-12-23 14:26:59.907 | 43 | DEBUG | STARTUP | <main> | BootstrapUtils: | jvmPauseDetectorThread started | |
| node4 | 6m 8.542s | 2025-12-23 14:27:00.052 | 44 | INFO | STARTUP | <main> | PcesUtilities: | Span compaction completed for data/saved/preconsensus-events/4/2025/12/23/2025-12-23T14+21+06.468510091Z_seq0_minr1_maxr501_orgn0.pces, new upper bound is 399 | |
| node4 | 6m 8.548s | 2025-12-23 14:27:00.058 | 45 | INFO | STARTUP | <main> | StandardScratchpad: | Scratchpad platform.iss contents: | |
| LAST_ISS_ROUND null | |||||||||
| node4 | 6m 8.550s | 2025-12-23 14:27:00.060 | 46 | INFO | STARTUP | <main> | PlatformBuilder: | Default platform pool parallelism: 8 | |
| node4 | 6m 8.649s | 2025-12-23 14:27:00.159 | 47 | INFO | STARTUP | <main> | SwirldsPlatform: | Starting with roster history: | |
| RosterHistory[ currentRosterRound: 0 ][ no previous roster set ] Current Roster: { "rosterEntries": [{ "weight": "12500000000", "gossipCaCertificate": "MIIDpjCCAg6gAwIBAgIIdUmpLKzyXgUwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlMTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlMTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBALXCoDQ+HOVsEDTZpFuJITSaGwaKX2is5K1P/lV+G+ll6u36IdqKNnZIirJrpX2N0Ad6NeF/oFcMhietrKt818PDA9Tbb2tqcHNKTxxZAEj7amQTsrU4EsNmUhaPgMs89yj9WLxCXVzW05cQjqYEA/hymzohWs1BdU3Y2KdmELe0v5fzRgDpNgYHhUN7IrlrlgXEWpuKRskBYc4PIvyACijY0/zkeEAyHOshYYGKhQbNm/NGWhFq83ro77CZZhX3Vl7hRnHLaEoCEE8atY8R1Txhy8aObhiS6R8ZVRTkZLar/FG/xe78RQfwHHD1al2w5oHR7xgTZylhbD+nVQ09Zmi25USpvqwumbMBE0OWhV+VH1WLCHfLQs6/5yuDjeZ/0D9tpQ8pfkiEkGLedzUzQkq+4/HmN4IFTOhgJHlu1tVUqohZIPZ5zSzqkqFzFQGRo2uAX8C2EJ3qgQMAEOpH8iOjiSKsezlIPuwvmrVDPxVfpY2Cq60oxRu6B8bZdbQkfwIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQAloxwiVu7pBhkO4fLqYRw4FC0VEx+c47W4xnrq3G/uXMGwE2Mfwple9FZnfT9JgSoT1UVw+cigo4720WdrPqkK8qnA3/PzGXlfJ3k6eFcBuli/KY1TakIJUAxFt5biNKatheMwAKsbF/JyVyaqG2dbSaXQ6hZBLQTYmLrmFWMvi9QdM1S8vNVMjn0hE2qQJtnVRuVwqRaAQ225jDv2CUCT28t0EWE6ccbiRi74l8KoW1Lo3v2EQ6ZZ89Xt3CwFSQHa6YVT685ECy82qMysU+YHBe9WmwJW05UAAY7JRsOo+RuuU/r4acNLmzprG+l7qsqqPkwXTcziw9Y2OYsFgY4bTlIOV0JC0AYApctDB3gbn83LM73CWccGrXq0liSV0wL11wscH3gFohXrwb646+6hgncZiDshlZlWaFSkHQJAxTR9bsbsCwKdZpzIIVOVTOT/3oLQKCCQvPriTpJiNa0P6gB0pq64lNcyG9fL8vS3YFFnWJTZwb8ZzGK+LZ91/2Y=", "gossipEndpoint": [{ "ipAddressV4": "I8GRmQ==", "port": 30124 }, { "ipAddressV4": "CoAAWg==", "port": 30124 }] }, { "nodeId": "1", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJguXwyGFpb8MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTIwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTIwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDYXoYHBtw8adD5sxLZSnlG9XgBLWVbIDl3YA4rZZ11cgl6FG2TvF8UVNXQ177cRm1xUUJRI5ulSgDofnm7Iuf6c/GoQrud2nP1yMWewGslwiEi1h2pxbN7doFvn/92Y0lJVwSV/vOpbIyPRoMeF0jXd7TEI7dYj4S7gV9uWmQCIWjwTZqVsjIAtzEkYnmS0/m5XuD9MJsin8OQRu/PEFL8qaVPQJ2GhOhpUJqvADQ/Lsq/FHcPjylcRcnUQlFRojk2jqugtoRegByjPrAOSYGJeWUCVYmd7W51L/AkVx1rDLeHj0zLTTzQRF5G56i+S+tAcpY/uiCrwLvszFlDlD1diOuaucmu54lalrSTlVe5eOyq2ga2tKi11LQ+w09105zLyRWk7DBU93f5dTYNSmokI7b4sVRxu6SP0p/F9wND77wv2Ax5OpIWWty8zy8Y+xOuRyFu/rJ4ddDmRYvRmptM0rCAfv6hgd3m5Y/OAadQm/OuN91Uq9PIJdlMtjDbIfECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEANutmL3V1PlvlsZ6xG8Sx9cKTok3kf3rBf7D7eE8Nn8ryHi3cw9CvCaj1E6zmTTh9k23DAZVWulhjTY5GWcx5NO7QAWjKau44g/HecNNrWsD/+nIrhmAk2WxKp175CwqJaIWA7CM6VMfFktjaflUPcB6RJnHrAa8M1HUpEsBz0mFmLz7lIaDemxYCE8M8slb6wTMjpL83GB+ejudRe7YK2ZWixM+CGp0ARkV+EecHaCXgEoROUNwP6mZVJcgSVR1QBQwcGAMIrutsKENM8HR9o3LWacigoJXf+IX8c6aJhrHfFvm62q+hi3baj7iR6gebEdWPtmEXgoVWOk230fLGyPU1oBxaDdYa8V4+ZFv03O91By9tuFrwZOcLCb4CPRyr8A47lHNjRIeo2nUF/c+SjV0eBcPKCnn1nW/AQWCxJ0QzzG6tEeMAGdDrE2ujPlB+Y9Sn8vB0zjYQHTr1NKyyXNogB4y48jofLDLDGOQYI6uP2fDgZeiq4dV8w91WbPHV", "gossipEndpoint": [{ "ipAddressV4": "iHSdeQ==", "port": 30125 }, { "ipAddressV4": "CoAAXQ==", "port": 30125 }] }, { "nodeId": "2", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAJwswl59m488MA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTMwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDAqlNMpfduuW0ETQVjdKf5ZBe3Ug/ybRMoCWIlue8UoxFzamAtoeFEW3GVi862iImRVyHbkBZzDQUw4ABwMdxfzTL9voozkMaOZb4KQ9yZ9zNLAAmSSuE6RFmSJnBtfufxFXqiu6esbcvyropjZLc65F2uoMCpKN0CHFpWEb2GZAaipp7WCOon0NllDLqkjPylluXO4mjbzzMSDPbBWRD8VjjkxZeszWSXYxz9hqcRYX01CGg+jhooCQ6j2yB8sfFAffIeTG6GSV1uCFa4san2emhQWpr+cHaVYJMtejL43HaEVQnF3vh5Z10T/7co63C63aay2hs6Bx5SschosyYiafI7GtbQ4qpOgjEDFT1jlydK21gy6MV3SFEYwcUfxvxxRj6pS7xiMFn4FYnBKPJWkaDkwTqboEshxstvASQOW993uEwzh4EjctRHSjSuTU6S9OsWi5I5cRF+xK6GaWsTp0KyO8uVpuM9kZfpOcor294quyKJ9nylNyIt/m8Q8/ECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAqPLB/xr0Yv1l9w/RO+bqFtl8TkxF/6jOqoEUXY06dEInopLYpmkksZZ9G8vebt6hAoLjaxNMdRqCkzKgy4jn7/SQZNV9FMbZ7ckiDxsBxYZ2ZaBootuWzzVD6hCSO3Tg6JgkIzldtFtNcDVBRgZnHg+Rl6hn+gFV5S2OTTTPHWK7GHwgHXLhK7N0RL4YVrRCi/HTUZnuYCjBwvdDte5iqytY05cAO4p72P6YtDaOdAfL/IIKd1ylCWITDqTp/JDBz1uxjQmsXLVD/KEEtlvYlGjIr+wUUqIUPhFvB6ajl2NO0D/r+t1BH454zbodU92QnOJpXpoNuOv7jjALHCqo70mCSwTNUSZuVP6/KLmQe8sSzYs7O/c25FzHKBYy+aZujoa/X7aI6XVmsUkj6ae9MSvQurk0jMNg/Jy5EtWOMy7WEuyadrAv6KSP3oIfmL9jWoPcyOMfvjRHxGqOfZuFZatAwswY6O0E3ATTrN03t/BVqNHIYIXc6UOiUTo2Nx56", "gossipEndpoint": [{ "ipAddressV4": "iHeLPg==", "port": 30126 }, { "ipAddressV4": "CoAAXA==", "port": 30126 }] }, { "nodeId": "3", "weight": "12500000000", "gossipCaCertificate": 
"MIIDpzCCAg+gAwIBAgIJAOxH0o7YkAUoMA0GCSqGSIb3DQEBDAUAMBIxEDAOBgNVBAMTB3Mtbm9kZTQwIBcNMDAwMTAxMDAwMDAwWhgPMjEwMDAxMDEwMDAwMDBaMBIxEDAOBgNVBAMTB3Mtbm9kZTQwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDf6+SJl+puqRNd5r2Tb802jQTqPm7k3NXIeU8NQ3Hy9p0G+9p4Hgnt3ftipar7lKPKnp4PFrOP7E7XSKpafxK2OVQ0jTMvc6Yjqt+9mzyNSI1I8cSHTmhJ7kMBt0+NwVM8QN+fbKcbQaoNiPwMcckVtGeMad4aZM6hRyxzI0H3wgMj4JiM9VRwx7JbEo3R7akRwLwGr9ZQm2EQwqiyReNkBnXrsyP4KPPVAoeMfGchoAuBbV+r6v1OeYddocYmZkrsvMXUKF/uEcgd8gTu+pv3jObwIEVqXo1yC6ZlCFqO7LIvT8jTAAljkszoo67ykXTbKS0PZeLDg6nvdPvBMQ50yjfswR88S6N8VU6pud7Y+VbMYUiGzlrFi4MB9dikAjEj4PEetQyZdn84ZXGxerXlU/vTO2Fp4i1ec5rmX1P0WYMlbNELE408j5nfCfzD/qdcF5HZAiUVTYU/SWpzWcn34++KGpuqZZQdsGwCLQWeMeA/OEemYChis4cO94aOzrECAwEAATANBgkqhkiG9w0BAQwFAAOCAYEAlj5YIsbYXk2JGP9kRCBLDgz27ymYi1KDbO8g18V4T0zj2Zl7858U7mF9UBSSW+Cjl1UtUdvqFWZhh8jRoO3Jov1QGTULHRfyyPElD4VpwFribiu4GYJaodYy6NE50WwSJf32gLG0jHQWt7q+cOrn6WaG2h8O1sIxbTlnu1kqKQUQtu4oX8u23b5m9QXVJfJVdecwD5Rmab2d3dq/NNv2iNELH0myqtcoqw26xwIvXwaS4Gqi+Y0cOfjWL5Gv5AHIwvBXGIh3KUU7pbyBzqjkigbzSeoZw0C8G2cRTl0+QTuet2SVYlFh5J9/FBLvIfMfIpguglaU6xTVoRpo7RF24qQKFt2IlBROpqcwl0FyfE+2c19FGt1V8E5dYqE4T2mHT6FSOI3DckA2afBm1OCeMNtkqCQT8x+JvdKrgUh44QDm4PIVZDzaxog/zOzRWPCgpCPq0HcNMzgCVFt+4q8eTL9Ju/rQcS9bDosjMA69NGLIOCdPW2i/gkS9x9rTXgyp", "gossipEndpoint": [{ "ipAddressV4": "IkeHcg==", "port": 30127 }, { "ipAddressV4": "CoAAXg==", "port": 30127 }] }, { "nodeId": "4", "gossipCaCertificate": 
"MIIDpjCCAg6gAwIBAgIIXlngkVEv6iMwDQYJKoZIhvcNAQEMBQAwEjEQMA4GA1UEAxMHcy1ub2RlNTAgFw0wMDAxMDEwMDAwMDBaGA8yMTAwMDEwMTAwMDAwMFowEjEQMA4GA1UEAxMHcy1ub2RlNTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAL4o3FK8th1cG+FSlw4iT9FlkwK+hOj4Ay6Z70mZlsNwszgxvddUEO4BEdA1iSWfxkYOLl4QwwPr3l394a07VfB5OK3dqJ6CjVdByyvzghtk3gOpkskWlJxp6vah7BbIJFWE8off7fhCdwAGSrwIRdGE8u8GbKJIdHk6/XyjB3j0BXTIgeaPTJxLeuz/2l/dQVRMXyZNxlc5UVQYnX9haMRk7M5bkb9uwfYPRikEJFp6G72x7M7Q9lBGJ3ArCQn/lPJfHSg01GxfDhWH8DOwLaFdv1bCs2zHTn7R7Wq9ymXvkUsZhlYO4mLR8HKDcM3sCrJa2rg8vgnIoZupHABKxkgtT2wxV7fM5f2oiz0mDYDTRJpgmK1lmNANj2tKnGqeDnsW7Q3zwufgZZhbks8+8uigyOyKNbp6D7Vv5KeYRibjr/xh+yWT0v02dtpBIdhqDa5CUVD9fCwigZj3PQc8N4e47ZL6s1pXpQ6Cf0lB0fSsvyhnGRa8HMx2q5eg5j/lCQIDAQABMA0GCSqGSIb3DQEBDAUAA4IBgQCr9yUzOoi0xhoDE1mqR3FR/iVCq9PaBUURWL743LDMrlEvpzKX0upcwwwdgJFjVqVUywh6rKeHQt4O4UV6FIbpp0PSjSE7XZSK3UNqnhZJhQ3aNrOP+6wBhm2B0ZjrxyMS1EWeD9tcNkdYluO00RlieAEV4zwoAfeFPSB21iXW5dU8idhNuTLptDc7SJoErxN+44jvcrSe/ZhpQohG6WfyDPH0BE1tyzsiD29PAWKkrfhg5kzjTAP/qFp+ByazeltP9/F0NXI5AHbE0pKYr56XUlwDfDZOTU9b1YeS7kKyPvccvC2j9NjGGM7NjafdFLHUTYBZiNUTZXVstddYtTCVbTqI7I/x6hoeeNVDZv7XluwZLrYsDNsNrWU3c9VijPK1CE5Owy+gJoGgxEHfA/n9Jvc3lEesqKBpW92RazkpHW2eD9wh8Ayv3q6PNDGzWyiXA8YWW6yD/dIp2Oh8szZUfOXy8sQ8VW86T6RsqGP5CKKPGW1NnP/KTKe5/WoBLZQ=", "gossipEndpoint": [{ "ipAddressV4": "IkSEKg==", "port": 30128 }, { "ipAddressV4": "CoAAWw==", "port": 30128 }] }] } | |||||||||
| node4 | 6m 8.680s | 2025-12-23 14:27:00.190 | 48 | INFO | STARTUP | <main> | ConsistencyTestingToolState: | State initialized with state long 5010118862946555570. | |
| node4 | 6m 8.681s | 2025-12-23 14:27:00.191 | 49 | INFO | STARTUP | <main> | ConsistencyTestingToolState: | State initialized with 384 rounds handled. | |
| node4 | 6m 8.682s | 2025-12-23 14:27:00.192 | 50 | INFO | STARTUP | <main> | TransactionHandlingHistory: | Consistency testing tool log path: data/saved/consistency-test/4/ConsistencyTestLog.csv | |
| node4 | 6m 8.682s | 2025-12-23 14:27:00.192 | 51 | INFO | STARTUP | <main> | TransactionHandlingHistory: | Log file found. Parsing previous history | |
| node4 | 6m 8.732s | 2025-12-23 14:27:00.242 | 52 | INFO | STARTUP | <main> | StateInitializer: | The platform is using the following initial state: | |
| Round: 384 Timestamp: 2025-12-23T14:24:00.075355Z Next consensus number: 13604 Legacy running event hash: 848326aa6269cd9d23c99f1e635a6472b89892924043a7cc6901065a42c853def1f1c8b573e53293e89e250d83b444c8 Legacy running event mnemonic: hurry-stem-essay-erase Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 1872318141 Root hash: d91d7dcc542f3b00e72936a8794345efe7342b387377e6bc8712689af31c119f1160524c05b8d45ef2c59c26af852855 (root) VirtualMap state / symbol-ridge-magnet-gas {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"gallery-slim-shadow-goat"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"media-apple-lobster-harsh"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"clump-chimney-cable-explain"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"distance-expect-share-betray"}}} | |||||||||
| node4 | 6m 8.739s | 2025-12-23 14:27:00.249 | 54 | INFO | RECONNECT | <<platform-core: reconnectController>> | ReconnectController: | Starting the ReconnectController | |
| node4 | 6m 8.962s | 2025-12-23 14:27:00.472 | 55 | INFO | EVENT_STREAM | <main> | DefaultConsensusEventStream: | EventStreamManager::updateRunningHash: 848326aa6269cd9d23c99f1e635a6472b89892924043a7cc6901065a42c853def1f1c8b573e53293e89e250d83b444c8 | |
| node4 | 6m 8.974s | 2025-12-23 14:27:00.484 | 56 | INFO | STARTUP | <platformForkJoinThread-4> | Shadowgraph: | Shadowgraph starting from expiration threshold 354 | |
| node4 | 6m 8.980s | 2025-12-23 14:27:00.490 | 58 | INFO | STARTUP | <<start-node-4>> | ConsistencyTestingToolMain: | init called in Main for node 4. | |
| node4 | 6m 8.981s | 2025-12-23 14:27:00.491 | 59 | INFO | STARTUP | <<start-node-4>> | SwirldsPlatform: | Starting platform 4 | |
| node4 | 6m 8.982s | 2025-12-23 14:27:00.492 | 60 | INFO | STARTUP | <<platform: recycle-bin-cleanup>> | RecycleBinImpl: | Deleted 0 files from the recycle bin. | |
| node4 | 6m 8.985s | 2025-12-23 14:27:00.495 | 61 | INFO | STARTUP | <<start-node-4>> | CycleFinder: | No cyclical back pressure detected in wiring model. | |
| node4 | 6m 8.987s | 2025-12-23 14:27:00.497 | 62 | INFO | STARTUP | <<start-node-4>> | DirectSchedulerChecks: | No illegal direct scheduler use detected in the wiring model. | |
| node4 | 6m 8.987s | 2025-12-23 14:27:00.497 | 63 | INFO | STARTUP | <<start-node-4>> | InputWireChecks: | All input wires have been bound. | |
| node4 | 6m 8.990s | 2025-12-23 14:27:00.500 | 64 | INFO | STARTUP | <<start-node-4>> | SwirldsPlatform: | replaying preconsensus event stream starting at 354 | |
| node4 | 6m 8.996s | 2025-12-23 14:27:00.506 | 65 | INFO | PLATFORM_STATUS | <platformForkJoinThread-5> | StatusStateMachine: | Platform spent 194.0 ms in STARTING_UP. Now in REPLAYING_EVENTS | |
| node4 | 6m 9.292s | 2025-12-23 14:27:00.802 | 66 | INFO | STARTUP | <<scheduler ConsensusEngine>> | ConsensusImpl: | Found init judge (CR:4 H:c1ae2b472ddf BR:382), num remaining: 4 | |
| node4 | 6m 9.294s | 2025-12-23 14:27:00.804 | 67 | INFO | STARTUP | <<scheduler ConsensusEngine>> | ConsensusImpl: | Found init judge (CR:1 H:6a0535fb3a17 BR:382), num remaining: 3 | |
| node4 | 6m 9.294s | 2025-12-23 14:27:00.804 | 68 | INFO | STARTUP | <<scheduler ConsensusEngine>> | ConsensusImpl: | Found init judge (CR:0 H:26e6173ae198 BR:383), num remaining: 2 | |
| node4 | 6m 9.295s | 2025-12-23 14:27:00.805 | 69 | INFO | STARTUP | <<scheduler ConsensusEngine>> | ConsensusImpl: | Found init judge (CR:3 H:360bf2d08a2a BR:382), num remaining: 1 | |
| node4 | 6m 9.295s | 2025-12-23 14:27:00.805 | 70 | INFO | STARTUP | <<scheduler ConsensusEngine>> | ConsensusImpl: | Found init judge (CR:2 H:bb3a1c222cc3 BR:383), num remaining: 0 | |
| node4 | 6m 9.432s | 2025-12-23 14:27:00.942 | 147 | INFO | STARTUP | <<start-node-4>> | PcesReplayer: | Replayed 1,666 preconsensus events with max birth round 399. These events contained 2,347 transactions. 14 rounds reached consensus spanning 6.6 seconds of consensus time. The latest round to reach consensus is round 398. Replay took 440.0 milliseconds. | |
| node4 | 6m 9.437s | 2025-12-23 14:27:00.947 | 149 | INFO | STARTUP | <<app: appMain 4>> | ConsistencyTestingToolMain: | run called in Main. | |
| node4 | 6m 9.439s | 2025-12-23 14:27:00.949 | 150 | INFO | PLATFORM_STATUS | <platformForkJoinThread-3> | StatusStateMachine: | Platform spent 441.0 ms in REPLAYING_EVENTS. Now in OBSERVING | |
| node3 | 6m 9.541s | 2025-12-23 14:27:01.051 | 9157 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 798 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node0 | 6m 9.561s | 2025-12-23 14:27:01.071 | 9107 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 798 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node2 | 6m 9.570s | 2025-12-23 14:27:01.080 | 9076 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 798 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node1 | 6m 9.637s | 2025-12-23 14:27:01.147 | 9263 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 798 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node1 | 6m 9.729s | 2025-12-23 14:27:01.239 | 9266 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 798 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/798 | |
| node1 | 6m 9.730s | 2025-12-23 14:27:01.240 | 9267 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/43 for 798 | |
| node0 | 6m 9.780s | 2025-12-23 14:27:01.290 | 9110 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 798 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/798 | |
| node0 | 6m 9.780s | 2025-12-23 14:27:01.290 | 9111 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/43 for 798 | |
| node3 | 6m 9.785s | 2025-12-23 14:27:01.295 | 9160 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 798 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/798 | |
| node3 | 6m 9.786s | 2025-12-23 14:27:01.296 | 9161 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/43 for 798 | |
| node1 | 6m 9.821s | 2025-12-23 14:27:01.331 | 9306 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/43 for 798 | |
| node1 | 6m 9.824s | 2025-12-23 14:27:01.334 | 9307 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 798 Timestamp: 2025-12-23T14:27:00.203738Z Next consensus number: 23759 Legacy running event hash: 1866133710c9b2f30fd78bee681a68d1b18f3d3195ae2a7eb31edbf6f03d17f7a264f07c20c80ae4c772b9c27d7f6fae Legacy running event mnemonic: art-group-bomb-payment Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 908752772 Root hash: d43e9ffcfd511c155e25fe14115142bedfe1f5cd9b2290aec8ea3807cd510d4498b51f6db167b7a821dd4aac9fbb8d08 (root) VirtualMap state / joke-above-field-live {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"diagram-census-veteran-check"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"flag-segment-century-level"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"clump-chimney-cable-explain"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"misery-surface-minute-mimic"}}} | |||||||||
| node1 | 6m 9.833s | 2025-12-23 14:27:01.343 | 9308 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 2 preconsensus files on disk. | |
| First file: data/saved/preconsensus-events/1/2025/12/23/2025-12-23T14+24+52.305915291Z_seq1_minr474_maxr5474_orgn0.pces Last file: data/saved/preconsensus-events/1/2025/12/23/2025-12-23T14+21+06.465098814Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node1 | 6m 9.834s | 2025-12-23 14:27:01.344 | 9309 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 771 File: data/saved/preconsensus-events/1/2025/12/23/2025-12-23T14+24+52.305915291Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node1 | 6m 9.834s | 2025-12-23 14:27:01.344 | 9310 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node1 | 6m 9.840s | 2025-12-23 14:27:01.350 | 9311 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node1 | 6m 9.840s | 2025-12-23 14:27:01.350 | 9312 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 798 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/798 {"round":798,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/798/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node1 | 6m 9.842s | 2025-12-23 14:27:01.352 | 9313 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/119 | |
| node2 | 6m 9.856s | 2025-12-23 14:27:01.366 | 9079 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 798 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/798 | |
| node2 | 6m 9.856s | 2025-12-23 14:27:01.366 | 9080 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/43 for 798 | |
| node0 | 6m 9.864s | 2025-12-23 14:27:01.374 | 9142 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/43 for 798 | |
| node3 | 6m 9.865s | 2025-12-23 14:27:01.375 | 9192 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/43 for 798 | |
| node0 | 6m 9.866s | 2025-12-23 14:27:01.376 | 9143 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 798 Timestamp: 2025-12-23T14:27:00.203738Z Next consensus number: 23759 Legacy running event hash: 1866133710c9b2f30fd78bee681a68d1b18f3d3195ae2a7eb31edbf6f03d17f7a264f07c20c80ae4c772b9c27d7f6fae Legacy running event mnemonic: art-group-bomb-payment Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 908752772 Root hash: d43e9ffcfd511c155e25fe14115142bedfe1f5cd9b2290aec8ea3807cd510d4498b51f6db167b7a821dd4aac9fbb8d08 (root) VirtualMap state / joke-above-field-live {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"diagram-census-veteran-check"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"flag-segment-century-level"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"clump-chimney-cable-explain"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"misery-surface-minute-mimic"}}} | |||||||||
| node3 | 6m 9.866s | 2025-12-23 14:27:01.376 | 9193 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 798 Timestamp: 2025-12-23T14:27:00.203738Z Next consensus number: 23759 Legacy running event hash: 1866133710c9b2f30fd78bee681a68d1b18f3d3195ae2a7eb31edbf6f03d17f7a264f07c20c80ae4c772b9c27d7f6fae Legacy running event mnemonic: art-group-bomb-payment Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 908752772 Root hash: d43e9ffcfd511c155e25fe14115142bedfe1f5cd9b2290aec8ea3807cd510d4498b51f6db167b7a821dd4aac9fbb8d08 (root) VirtualMap state / joke-above-field-live {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"diagram-census-veteran-check"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"flag-segment-century-level"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"clump-chimney-cable-explain"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"misery-surface-minute-mimic"}}} | |||||||||
| node0 | 6m 9.873s | 2025-12-23 14:27:01.383 | 9144 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 2 preconsensus files on disk. | |
| First file: data/saved/preconsensus-events/0/2025/12/23/2025-12-23T14+24+52.381129600Z_seq1_minr474_maxr5474_orgn0.pces Last file: data/saved/preconsensus-events/0/2025/12/23/2025-12-23T14+21+06.555430570Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node0 | 6m 9.873s | 2025-12-23 14:27:01.383 | 9145 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 771 File: data/saved/preconsensus-events/0/2025/12/23/2025-12-23T14+24+52.381129600Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node0 | 6m 9.873s | 2025-12-23 14:27:01.383 | 9146 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node3 | 6m 9.873s | 2025-12-23 14:27:01.383 | 9194 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 2 preconsensus files on disk. | |
| First file: data/saved/preconsensus-events/3/2025/12/23/2025-12-23T14+21+06.353176078Z_seq0_minr1_maxr501_orgn0.pces Last file: data/saved/preconsensus-events/3/2025/12/23/2025-12-23T14+24+52.320371446Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node3 | 6m 9.873s | 2025-12-23 14:27:01.383 | 9195 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 771 File: data/saved/preconsensus-events/3/2025/12/23/2025-12-23T14+24+52.320371446Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node3 | 6m 9.873s | 2025-12-23 14:27:01.383 | 9196 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node0 | 6m 9.879s | 2025-12-23 14:27:01.389 | 9147 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node0 | 6m 9.879s | 2025-12-23 14:27:01.389 | 9148 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 798 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/798 {"round":798,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/798/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node3 | 6m 9.879s | 2025-12-23 14:27:01.389 | 9197 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node3 | 6m 9.879s | 2025-12-23 14:27:01.389 | 9198 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 798 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/798 {"round":798,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/798/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node3 | 6m 9.880s | 2025-12-23 14:27:01.390 | 9199 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/119 | |
| node0 | 6m 9.881s | 2025-12-23 14:27:01.391 | 9149 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/119 | |
| node2 | 6m 9.938s | 2025-12-23 14:27:01.448 | 9119 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/43 for 798 | |
| node2 | 6m 9.941s | 2025-12-23 14:27:01.451 | 9120 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 798 Timestamp: 2025-12-23T14:27:00.203738Z Next consensus number: 23759 Legacy running event hash: 1866133710c9b2f30fd78bee681a68d1b18f3d3195ae2a7eb31edbf6f03d17f7a264f07c20c80ae4c772b9c27d7f6fae Legacy running event mnemonic: art-group-bomb-payment Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 908752772 Root hash: d43e9ffcfd511c155e25fe14115142bedfe1f5cd9b2290aec8ea3807cd510d4498b51f6db167b7a821dd4aac9fbb8d08 (root) VirtualMap state / joke-above-field-live {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"diagram-census-veteran-check"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"flag-segment-century-level"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"clump-chimney-cable-explain"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"misery-surface-minute-mimic"}}} | |||||||||
| node2 | 6m 9.947s | 2025-12-23 14:27:01.457 | 9121 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 2 preconsensus files on disk. | |
| First file: data/saved/preconsensus-events/2/2025/12/23/2025-12-23T14+24+52.339344545Z_seq1_minr474_maxr5474_orgn0.pces Last file: data/saved/preconsensus-events/2/2025/12/23/2025-12-23T14+21+06.515558937Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node2 | 6m 9.948s | 2025-12-23 14:27:01.458 | 9122 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 771 File: data/saved/preconsensus-events/2/2025/12/23/2025-12-23T14+24+52.339344545Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node2 | 6m 9.948s | 2025-12-23 14:27:01.458 | 9123 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node2 | 6m 9.953s | 2025-12-23 14:27:01.463 | 9124 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node2 | 6m 9.953s | 2025-12-23 14:27:01.463 | 9125 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 798 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/798 {"round":798,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/798/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node2 | 6m 9.955s | 2025-12-23 14:27:01.465 | 9126 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/119 | |
| node4 | 6m 10.364s | 2025-12-23 14:27:01.874 | 215 | INFO | RECONNECT | <<platform-core: reconnectController>> | ReconnectController: | Preparing for reconnect, stopping gossip | |
| node4 | 6m 10.364s | 2025-12-23 14:27:01.874 | 216 | INFO | RECONNECT | <<platform-core: SyncProtocolWith2 4 to 2>> | RpcPeerHandler: | SELF_FALLEN_BEHIND local ev=EventWindow[latestConsensusRound=398,newEventBirthRound=399,ancientThreshold=370,expiredThreshold=354] remote ev=EventWindow[latestConsensusRound=799,newEventBirthRound=800,ancientThreshold=772,expiredThreshold=698] | |
| node4 | 6m 10.364s | 2025-12-23 14:27:01.874 | 217 | INFO | RECONNECT | <<platform-core: SyncProtocolWith3 4 to 3>> | RpcPeerHandler: | SELF_FALLEN_BEHIND local ev=EventWindow[latestConsensusRound=398,newEventBirthRound=399,ancientThreshold=370,expiredThreshold=354] remote ev=EventWindow[latestConsensusRound=799,newEventBirthRound=800,ancientThreshold=772,expiredThreshold=698] | |
| node4 | 6m 10.364s | 2025-12-23 14:27:01.874 | 218 | INFO | RECONNECT | <<platform-core: SyncProtocolWith1 4 to 1>> | RpcPeerHandler: | SELF_FALLEN_BEHIND local ev=EventWindow[latestConsensusRound=398,newEventBirthRound=399,ancientThreshold=370,expiredThreshold=354] remote ev=EventWindow[latestConsensusRound=799,newEventBirthRound=800,ancientThreshold=772,expiredThreshold=698] | |
| node4 | 6m 10.365s | 2025-12-23 14:27:01.875 | 219 | INFO | RECONNECT | <<platform-core: SyncProtocolWith0 4 to 0>> | RpcPeerHandler: | SELF_FALLEN_BEHIND local ev=EventWindow[latestConsensusRound=398,newEventBirthRound=399,ancientThreshold=370,expiredThreshold=354] remote ev=EventWindow[latestConsensusRound=799,newEventBirthRound=800,ancientThreshold=772,expiredThreshold=698] | |
| node4 | 6m 10.365s | 2025-12-23 14:27:01.875 | 220 | INFO | PLATFORM_STATUS | <platformForkJoinThread-3> | StatusStateMachine: | Platform spent 924.0 ms in OBSERVING. Now in BEHIND | |
| node0 | 6m 10.435s | 2025-12-23 14:27:01.945 | 9172 | INFO | RECONNECT | <<platform-core: SyncProtocolWith4 0 to 4>> | RpcPeerHandler: | OTHER_FALLEN_BEHIND local ev=EventWindow[latestConsensusRound=799,newEventBirthRound=800,ancientThreshold=772,expiredThreshold=698] remote ev=EventWindow[latestConsensusRound=398,newEventBirthRound=399,ancientThreshold=370,expiredThreshold=354] | |
| node1 | 6m 10.435s | 2025-12-23 14:27:01.945 | 9325 | INFO | RECONNECT | <<platform-core: SyncProtocolWith4 1 to 4>> | RpcPeerHandler: | OTHER_FALLEN_BEHIND local ev=EventWindow[latestConsensusRound=799,newEventBirthRound=800,ancientThreshold=772,expiredThreshold=698] remote ev=EventWindow[latestConsensusRound=398,newEventBirthRound=399,ancientThreshold=370,expiredThreshold=354] | |
| node2 | 6m 10.435s | 2025-12-23 14:27:01.945 | 9138 | INFO | RECONNECT | <<platform-core: SyncProtocolWith4 2 to 4>> | RpcPeerHandler: | OTHER_FALLEN_BEHIND local ev=EventWindow[latestConsensusRound=799,newEventBirthRound=800,ancientThreshold=772,expiredThreshold=698] remote ev=EventWindow[latestConsensusRound=398,newEventBirthRound=399,ancientThreshold=370,expiredThreshold=354] | |
| node3 | 6m 10.435s | 2025-12-23 14:27:01.945 | 9219 | INFO | RECONNECT | <<platform-core: SyncProtocolWith4 3 to 4>> | RpcPeerHandler: | OTHER_FALLEN_BEHIND local ev=EventWindow[latestConsensusRound=799,newEventBirthRound=800,ancientThreshold=772,expiredThreshold=698] remote ev=EventWindow[latestConsensusRound=398,newEventBirthRound=399,ancientThreshold=370,expiredThreshold=354] | |
| node4 | 6m 10.518s | 2025-12-23 14:27:02.028 | 221 | INFO | RECONNECT | <<platform-core: reconnectController>> | ReconnectController: | Preparing for reconnect, start clearing queues | |
| node4 | 6m 10.519s | 2025-12-23 14:27:02.029 | 222 | INFO | RECONNECT | <<platform-core: reconnectController>> | ReconnectController: | Queues have been cleared | |
| node4 | 6m 10.520s | 2025-12-23 14:27:02.030 | 223 | INFO | RECONNECT | <<platform-core: reconnectController>> | ReconnectController: | Waiting for a state to be obtained from a peer | |
| node4 | 6m 10.681s | 2025-12-23 14:27:02.191 | 224 | INFO | RECONNECT | <<platform-core: SyncProtocolWith2 4 to 2>> | ReconnectStatePeerProtocol: | Starting reconnect in role of the receiver. {"receiving":true,"nodeId":4,"otherNodeId":2,"round":398} [com.swirlds.logging.legacy.payload.ReconnectStartPayload] | |
| node4 | 6m 10.682s | 2025-12-23 14:27:02.192 | 225 | INFO | RECONNECT | <<platform-core: SyncProtocolWith2 4 to 2>> | ReconnectStateLearner: | Receiving signed state signatures | |
| node2 | 6m 10.740s | 2025-12-23 14:27:02.250 | 9157 | INFO | RECONNECT | <<platform-core: SyncProtocolWith4 2 to 4>> | ReconnectStateTeacher: | Starting reconnect in the role of the sender {"receiving":false,"nodeId":2,"otherNodeId":4,"round":799} [com.swirlds.logging.legacy.payload.ReconnectStartPayload] | |
| node2 | 6m 10.740s | 2025-12-23 14:27:02.250 | 9158 | INFO | RECONNECT | <<platform-core: SyncProtocolWith4 2 to 4>> | ReconnectStateTeacher: | The following state will be sent to the learner: | |
| Round: 799 Timestamp: 2025-12-23T14:27:00.637024Z Next consensus number: 23782 Legacy running event hash: c37c2f70c655353dc5bfa4ca3c0198f43a62666566cfcffc100343b7e7142efc809c1a52fcf9323fee2c2a2e8034a537 Legacy running event mnemonic: mountain-unlock-cube-police Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -2074313208 Root hash: db97e956e20771e04033658e00a8524f6cf70a2ebfe1d26485deb929ddd04963573cd6bafa091c5566db65d0e82aa947 (root) VirtualMap state / minor-stem-sand-despair {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"approve-find-answer-target"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"price-stone-frog-record"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"clump-chimney-cable-explain"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"garlic-match-fringe-thunder"}}} | |||||||||
| node2 | 6m 10.741s | 2025-12-23 14:27:02.251 | 9159 | INFO | RECONNECT | <<platform-core: SyncProtocolWith4 2 to 4>> | ReconnectStateTeacher: | Sending signatures from nodes 0, 1, 2 (signing weight = 37500000000/50000000000) for state hash db97e956e20771e04033658e00a8524f6cf70a2ebfe1d26485deb929ddd04963573cd6bafa091c5566db65d0e82aa947 | |
| node2 | 6m 10.741s | 2025-12-23 14:27:02.251 | 9160 | INFO | RECONNECT | <<platform-core: SyncProtocolWith4 2 to 4>> | ReconnectStateTeacher: | Starting synchronization in the role of the sender. | |
| node2 | 6m 10.743s | 2025-12-23 14:27:02.253 | 9161 | INFO | RECONNECT | <<platform-core: SyncProtocolWith4 2 to 4>> | TeachingSynchronizer: | sending tree rooted at null with route [] | |
| node2 | 6m 10.752s | 2025-12-23 14:27:02.262 | 9162 | INFO | RECONNECT | <<work group teaching-synchronizer: async-input-stream #0>> | AsyncInputStream: | com.swirlds.common.merkle.synchronization.streams.AsyncInputStream@7d512af8 start run() | |
| node4 | 6m 10.812s | 2025-12-23 14:27:02.322 | 226 | INFO | RECONNECT | <<platform-core: SyncProtocolWith2 4 to 2>> | ReconnectStateLearner: | Received signatures from nodes 0, 1, 2 | |
| node4 | 6m 11.043s | 2025-12-23 14:27:02.553 | 253 | INFO | RECONNECT | <<platform-core: SyncProtocolWith2 4 to 2>> | LearningSynchronizer: | learner calls receiveTree() | |
| node4 | 6m 11.043s | 2025-12-23 14:27:02.553 | 254 | INFO | RECONNECT | <<platform-core: SyncProtocolWith2 4 to 2>> | LearningSynchronizer: | synchronizing tree | |
| node4 | 6m 11.044s | 2025-12-23 14:27:02.554 | 255 | INFO | RECONNECT | <<platform-core: SyncProtocolWith2 4 to 2>> | LearningSynchronizer: | receiving tree rooted at com.swirlds.virtualmap.VirtualMap with route [] | |
| node4 | 6m 11.053s | 2025-12-23 14:27:02.563 | 256 | INFO | RECONNECT | <<work group learning-synchronizer: async-input-stream #0>> | AsyncInputStream: | com.swirlds.common.merkle.synchronization.streams.AsyncInputStream@1ba811fb start run() | |
| node4 | 6m 11.113s | 2025-12-23 14:27:02.623 | 257 | INFO | RECONNECT | <<work group learning-synchronizer: async-input-stream #0>> | ReconnectNodeRemover: | setPathInformation(): firstLeafPath: 4 -> 4, lastLeafPath: 8 -> 8 | |
| node4 | 6m 11.114s | 2025-12-23 14:27:02.624 | 258 | INFO | RECONNECT | <<work group learning-synchronizer: async-input-stream #0>> | ReconnectNodeRemover: | setPathInformation(): done | |
| node4 | 6m 11.259s | 2025-12-23 14:27:02.769 | 259 | INFO | RECONNECT | <<work group learning-synchronizer: learner-task #2>> | LearnerPushTask: | learner thread finished the learning loop for the current subtree | |
| node4 | 6m 11.259s | 2025-12-23 14:27:02.769 | 260 | INFO | RECONNECT | <<work group learning-synchronizer: learner-task #2>> | LearnerPushVirtualTreeView: | call nodeRemover.allNodesReceived() | |
| node4 | 6m 11.260s | 2025-12-23 14:27:02.770 | 261 | INFO | RECONNECT | <<work group learning-synchronizer: learner-task #2>> | ReconnectNodeRemover: | allNodesReceived() | |
| node4 | 6m 11.260s | 2025-12-23 14:27:02.770 | 262 | INFO | RECONNECT | <<work group learning-synchronizer: learner-task #2>> | ReconnectNodeRemover: | allNodesReceived(): done | |
| node4 | 6m 11.261s | 2025-12-23 14:27:02.771 | 263 | INFO | RECONNECT | <<work group learning-synchronizer: learner-task #2>> | LearnerPushVirtualTreeView: | call root.endLearnerReconnect() | |
| node4 | 6m 11.261s | 2025-12-23 14:27:02.771 | 264 | INFO | RECONNECT | <<work group learning-synchronizer: learner-task #2>> | VirtualMap: | call reconnectIterator.close() | |
| node4 | 6m 11.261s | 2025-12-23 14:27:02.771 | 265 | INFO | RECONNECT | <<work group learning-synchronizer: learner-task #2>> | VirtualMap: | call setHashPrivate() | |
| node4 | 6m 11.284s | 2025-12-23 14:27:02.794 | 275 | INFO | RECONNECT | <<work group learning-synchronizer: learner-task #2>> | VirtualMap: | call postInit() | |
| node4 | 6m 11.285s | 2025-12-23 14:27:02.795 | 277 | INFO | RECONNECT | <<work group learning-synchronizer: learner-task #2>> | VirtualMap: | endLearnerReconnect() complete | |
| node4 | 6m 11.285s | 2025-12-23 14:27:02.795 | 278 | INFO | RECONNECT | <<work group learning-synchronizer: learner-task #2>> | LearnerPushVirtualTreeView: | close() complete | |
| node4 | 6m 11.285s | 2025-12-23 14:27:02.795 | 279 | INFO | RECONNECT | <<work group learning-synchronizer: learner-task #2>> | LearnerPushTask: | learner thread closed input, output, and view for the current subtree | |
| node4 | 6m 11.286s | 2025-12-23 14:27:02.796 | 280 | INFO | RECONNECT | <<work group learning-synchronizer: async-input-stream #0>> | AsyncInputStream: | com.swirlds.common.merkle.synchronization.streams.AsyncInputStream@1ba811fb finish run() | |
| node4 | 6m 11.287s | 2025-12-23 14:27:02.797 | 281 | INFO | RECONNECT | <<platform-core: SyncProtocolWith2 4 to 2>> | LearningSynchronizer: | received tree rooted at com.swirlds.virtualmap.VirtualMap with route [] | |
| node4 | 6m 11.288s | 2025-12-23 14:27:02.798 | 282 | INFO | RECONNECT | <<platform-core: SyncProtocolWith2 4 to 2>> | LearningSynchronizer: | synchronization complete | |
| node4 | 6m 11.288s | 2025-12-23 14:27:02.798 | 283 | INFO | RECONNECT | <<platform-core: SyncProtocolWith2 4 to 2>> | LearningSynchronizer: | learner calls initialize() | |
| node4 | 6m 11.288s | 2025-12-23 14:27:02.798 | 284 | INFO | RECONNECT | <<platform-core: SyncProtocolWith2 4 to 2>> | LearningSynchronizer: | initializing tree | |
| node4 | 6m 11.288s | 2025-12-23 14:27:02.798 | 285 | INFO | RECONNECT | <<platform-core: SyncProtocolWith2 4 to 2>> | LearningSynchronizer: | initialization complete | |
| node4 | 6m 11.289s | 2025-12-23 14:27:02.799 | 286 | INFO | RECONNECT | <<platform-core: SyncProtocolWith2 4 to 2>> | LearningSynchronizer: | learner calls hash() | |
| node4 | 6m 11.289s | 2025-12-23 14:27:02.799 | 287 | INFO | RECONNECT | <<platform-core: SyncProtocolWith2 4 to 2>> | LearningSynchronizer: | hashing tree | |
| node4 | 6m 11.289s | 2025-12-23 14:27:02.799 | 288 | INFO | RECONNECT | <<platform-core: SyncProtocolWith2 4 to 2>> | LearningSynchronizer: | hashing complete | |
| node4 | 6m 11.290s | 2025-12-23 14:27:02.800 | 289 | INFO | RECONNECT | <<platform-core: SyncProtocolWith2 4 to 2>> | LearningSynchronizer: | learner calls logStatistics() | |
| node4 | 6m 11.293s | 2025-12-23 14:27:02.803 | 290 | INFO | RECONNECT | <<platform-core: SyncProtocolWith2 4 to 2>> | LearningSynchronizer: | Finished synchronization {"timeInSeconds":0.244,"hashTimeInSeconds":0.0,"initializationTimeInSeconds":0.0,"totalNodes":9,"leafNodes":5,"redundantLeafNodes":2,"internalNodes":4,"redundantInternalNodes":0} [com.swirlds.logging.legacy.payload.SynchronizationCompletePayload] | |
| node4 | 6m 11.293s | 2025-12-23 14:27:02.803 | 291 | INFO | RECONNECT | <<platform-core: SyncProtocolWith2 4 to 2>> | LearningSynchronizer: | ReconnectMapMetrics: transfersFromTeacher=9; transfersFromLearner=8; internalHashes=3; internalCleanHashes=0; internalData=0; internalCleanData=0; leafHashes=5; leafCleanHashes=2; leafData=5; leafCleanData=2 | |
| node4 | 6m 11.294s | 2025-12-23 14:27:02.804 | 292 | INFO | RECONNECT | <<platform-core: SyncProtocolWith2 4 to 2>> | LearningSynchronizer: | learner is done synchronizing | |
| node4 | 6m 11.295s | 2025-12-23 14:27:02.805 | 293 | INFO | STARTUP | <<platform-core: SyncProtocolWith2 4 to 2>> | ConsistencyTestingToolState: | New State Constructed. | |
| node4 | 6m 11.301s | 2025-12-23 14:27:02.811 | 294 | INFO | RECONNECT | <<platform-core: SyncProtocolWith2 4 to 2>> | ReconnectStateLearner: | Reconnect data usage report {"dataMegabytes":0.005864143371582031} [com.swirlds.logging.legacy.payload.ReconnectDataUsagePayload] | |
| node2 | 6m 11.329s | 2025-12-23 14:27:02.839 | 9185 | INFO | RECONNECT | <<work group teaching-synchronizer: async-input-stream #0>> | AsyncInputStream: | com.swirlds.common.merkle.synchronization.streams.AsyncInputStream@7d512af8 finish run() | |
| node2 | 6m 11.331s | 2025-12-23 14:27:02.841 | 9186 | INFO | RECONNECT | <<platform-core: SyncProtocolWith4 2 to 4>> | TeachingSynchronizer: | finished sending tree | |
| node2 | 6m 11.334s | 2025-12-23 14:27:02.844 | 9189 | INFO | RECONNECT | <<platform-core: SyncProtocolWith4 2 to 4>> | ReconnectStateTeacher: | Finished synchronization in the role of the sender. | |
| node2 | 6m 11.374s | 2025-12-23 14:27:02.884 | 9190 | INFO | RECONNECT | <<platform-core: SyncProtocolWith4 2 to 4>> | ReconnectStateTeacher: | Finished reconnect in the role of the sender. {"receiving":false,"nodeId":2,"otherNodeId":4,"round":799} [com.swirlds.logging.legacy.payload.ReconnectFinishPayload] | |
| node4 | 6m 11.408s | 2025-12-23 14:27:02.918 | 295 | INFO | RECONNECT | <<platform-core: SyncProtocolWith2 4 to 2>> | ReconnectStatePeerProtocol: | Finished reconnect in the role of the receiver. {"receiving":true,"nodeId":4,"otherNodeId":2,"round":799} [com.swirlds.logging.legacy.payload.ReconnectFinishPayload] | |
| node4 | 6m 11.410s | 2025-12-23 14:27:02.920 | 296 | INFO | RECONNECT | <<platform-core: SyncProtocolWith2 4 to 2>> | ReconnectStatePeerProtocol: | Information for state received during reconnect: | |
| Round: 799 Timestamp: 2025-12-23T14:27:00.637024Z Next consensus number: 23782 Legacy running event hash: c37c2f70c655353dc5bfa4ca3c0198f43a62666566cfcffc100343b7e7142efc809c1a52fcf9323fee2c2a2e8034a537 Legacy running event mnemonic: mountain-unlock-cube-police Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -2074313208 Root hash: db97e956e20771e04033658e00a8524f6cf70a2ebfe1d26485deb929ddd04963573cd6bafa091c5566db65d0e82aa947 (root) VirtualMap state / minor-stem-sand-despair {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"clump-chimney-cable-explain"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"garlic-match-fringe-thunder"}}} | |||||||||
| node4 | 6m 11.411s | 2025-12-23 14:27:02.921 | 297 | INFO | RECONNECT | <<platform-core: reconnectController>> | ReconnectController: | A state was obtained from a peer | |
| node4 | 6m 11.413s | 2025-12-23 14:27:02.923 | 298 | INFO | RECONNECT | <<platform-core: reconnectController>> | ReconnectController: | The state obtained from a peer was validated | |
| node4 | 6m 11.414s | 2025-12-23 14:27:02.924 | 300 | DEBUG | RECONNECT | <<platform-core: reconnectController>> | ReconnectController: | `loadState` : reloading state | |
| node4 | 6m 11.415s | 2025-12-23 14:27:02.925 | 301 | INFO | STARTUP | <<platform-core: reconnectController>> | ConsistencyTestingToolState: | State initialized with state long 6319701421212752143. | |
| node4 | 6m 11.415s | 2025-12-23 14:27:02.925 | 302 | INFO | STARTUP | <<platform-core: reconnectController>> | ConsistencyTestingToolState: | State initialized with 799 rounds handled. | |
| node4 | 6m 11.416s | 2025-12-23 14:27:02.926 | 303 | INFO | STARTUP | <<platform-core: reconnectController>> | TransactionHandlingHistory: | Consistency testing tool log path: data/saved/consistency-test/4/ConsistencyTestLog.csv | |
| node4 | 6m 11.416s | 2025-12-23 14:27:02.926 | 304 | INFO | STARTUP | <<platform-core: reconnectController>> | TransactionHandlingHistory: | Log file found. Parsing previous history | |
| node4 | 6m 11.440s | 2025-12-23 14:27:02.950 | 309 | INFO | STATE_TO_DISK | <<platform-core: reconnectController>> | DefaultSavedStateController: | Signed state from round 799 created, will eventually be written to disk, for reason: RECONNECT | |
| node4 | 6m 11.441s | 2025-12-23 14:27:02.951 | 310 | INFO | PLATFORM_STATUS | <platformForkJoinThread-7> | StatusStateMachine: | Platform spent 1.1 s in BEHIND. Now in RECONNECT_COMPLETE | |
| node4 | 6m 11.442s | 2025-12-23 14:27:02.952 | 311 | INFO | STARTUP | <platformForkJoinThread-6> | Shadowgraph: | Shadowgraph starting from expiration threshold 772 | |
| node4 | 6m 11.446s | 2025-12-23 14:27:02.956 | 314 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 799 state to disk. Reason: RECONNECT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/799 | |
| node4 | 6m 11.447s | 2025-12-23 14:27:02.957 | 315 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/3 for 799 | |
| node4 | 6m 11.449s | 2025-12-23 14:27:02.959 | 316 | INFO | EVENT_STREAM | <<platform-core: reconnectController>> | DefaultConsensusEventStream: | EventStreamManager::updateRunningHash: c37c2f70c655353dc5bfa4ca3c0198f43a62666566cfcffc100343b7e7142efc809c1a52fcf9323fee2c2a2e8034a537 | |
| node4 | 6m 11.449s | 2025-12-23 14:27:02.959 | 318 | INFO | STARTUP | <platformForkJoinThread-1> | PcesFileManager: | Due to recent operations on this node, the local preconsensus event stream will have a discontinuity. The last file with the old origin round is 2025-12-23T14+21+06.468510091Z_seq0_minr1_maxr399_orgn0.pces. All future files will have an origin round of 799. | |
| node4 | 6m 11.450s | 2025-12-23 14:27:02.960 | 319 | INFO | RECONNECT | <<platform-core: reconnectController>> | ReconnectController: | Reconnect almost done resuming gossip | |
| node4 | 6m 11.627s | 2025-12-23 14:27:03.137 | 353 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/3 for 799 | |
| node4 | 6m 11.632s | 2025-12-23 14:27:03.142 | 354 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 799 Timestamp: 2025-12-23T14:27:00.637024Z Next consensus number: 23782 Legacy running event hash: c37c2f70c655353dc5bfa4ca3c0198f43a62666566cfcffc100343b7e7142efc809c1a52fcf9323fee2c2a2e8034a537 Legacy running event mnemonic: mountain-unlock-cube-police Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: -2074313208 Root hash: db97e956e20771e04033658e00a8524f6cf70a2ebfe1d26485deb929ddd04963573cd6bafa091c5566db65d0e82aa947 (root) VirtualMap state / minor-stem-sand-despair {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"approve-find-answer-target"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"price-stone-frog-record"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"clump-chimney-cable-explain"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"garlic-match-fringe-thunder"}}} | |||||||||
| node4 | 6m 11.678s | 2025-12-23 14:27:03.188 | 355 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus file on disk. | |
| File: data/saved/preconsensus-events/4/2025/12/23/2025-12-23T14+21+06.468510091Z_seq0_minr1_maxr399_orgn0.pces | |||||||||
| node4 | 6m 11.679s | 2025-12-23 14:27:03.189 | 356 | WARN | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | No preconsensus event files meeting specified criteria found to copy. Lower bound: 772 | |
| node4 | 6m 11.686s | 2025-12-23 14:27:03.196 | 357 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 799 to disk. Reason: RECONNECT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/799 {"round":799,"freezeState":false,"reason":"RECONNECT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/799/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node4 | 6m 11.690s | 2025-12-23 14:27:03.200 | 358 | INFO | PLATFORM_STATUS | <platformForkJoinThread-5> | StatusStateMachine: | Platform spent 248.0 ms in RECONNECT_COMPLETE. Now in CHECKING | |
| node4 | 6m 11.997s | 2025-12-23 14:27:03.507 | 359 | INFO | STARTUP | <<platform-core: MetricsThread #0>> | LegacyCsvWriter: | CsvWriter: Initializing statistics output in CSV format [ csvOutputFolder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats', csvFileName = 'PlatformTesting4.csv' ] | |
| node4 | 6m 12.000s | 2025-12-23 14:27:03.510 | 360 | DEBUG | STARTUP | <<platform-core: MetricsThread #0>> | LegacyCsvWriter: | CsvWriter: Using the existing metrics folder [ folder = '/opt/hgcapp/services-hedera/HapiApp2.0/data/stats' ] | |
| node4 | 6m 12.350s | 2025-12-23 14:27:03.860 | 361 | INFO | STARTUP | <<scheduler ConsensusEngine>> | ConsensusImpl: | Found init judge (CR:2 H:a744295c2781 BR:797), num remaining: 3 | |
| node4 | 6m 12.351s | 2025-12-23 14:27:03.861 | 362 | INFO | STARTUP | <<scheduler ConsensusEngine>> | ConsensusImpl: | Found init judge (CR:3 H:8097ca060990 BR:797), num remaining: 2 | |
| node4 | 6m 12.352s | 2025-12-23 14:27:03.862 | 363 | INFO | STARTUP | <<scheduler ConsensusEngine>> | ConsensusImpl: | Found init judge (CR:0 H:f3c15ffeca0a BR:797), num remaining: 1 | |
| node4 | 6m 12.352s | 2025-12-23 14:27:03.862 | 364 | INFO | STARTUP | <<scheduler ConsensusEngine>> | ConsensusImpl: | Found init judge (CR:1 H:ac45d72c0e84 BR:797), num remaining: 0 | |
| node4 | 6m 16.218s | 2025-12-23 14:27:07.728 | 495 | INFO | PLATFORM_STATUS | <platformForkJoinThread-7> | StatusStateMachine: | Platform spent 4.5 s in CHECKING. Now in ACTIVE | |
| node2 | 7m 9.341s | 2025-12-23 14:28:00.851 | 10606 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 931 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node3 | 7m 9.372s | 2025-12-23 14:28:00.882 | 10636 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 931 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node1 | 7m 9.378s | 2025-12-23 14:28:00.888 | 10762 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 931 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node4 | 7m 9.436s | 2025-12-23 14:28:00.946 | 1789 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 931 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node0 | 7m 9.452s | 2025-12-23 14:28:00.962 | 10598 | INFO | STATE_TO_DISK | <<scheduler TransactionHandler>> | DefaultSavedStateController: | Signed state from round 931 created, will eventually be written to disk, for reason: PERIODIC_SNAPSHOT | |
| node4 | 7m 9.585s | 2025-12-23 14:28:01.095 | 1792 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 931 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/931 | |
| node4 | 7m 9.586s | 2025-12-23 14:28:01.096 | 1793 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/10 for 931 | |
| node2 | 7m 9.618s | 2025-12-23 14:28:01.128 | 10609 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 931 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/931 | |
| node2 | 7m 9.618s | 2025-12-23 14:28:01.128 | 10610 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/52 for 931 | |
| node3 | 7m 9.650s | 2025-12-23 14:28:01.160 | 10639 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 931 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/931 | |
| node3 | 7m 9.651s | 2025-12-23 14:28:01.161 | 10640 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/50 for 931 | |
| node0 | 7m 9.654s | 2025-12-23 14:28:01.164 | 10601 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 931 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/931 | |
| node0 | 7m 9.655s | 2025-12-23 14:28:01.165 | 10602 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/50 for 931 | |
| node1 | 7m 9.664s | 2025-12-23 14:28:01.174 | 10765 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Started writing round 931 state to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/931 | |
| node1 | 7m 9.665s | 2025-12-23 14:28:01.175 | 10766 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Creating a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/50 for 931 | |
| node2 | 7m 9.701s | 2025-12-23 14:28:01.211 | 10643 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/52 for 931 | |
| node2 | 7m 9.703s | 2025-12-23 14:28:01.213 | 10644 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 931 Timestamp: 2025-12-23T14:28:00.023418742Z Next consensus number: 28410 Legacy running event hash: bd3eebfa6c5104c23ae8f78aa7603929bd4578b82a1c81184805697776a7855e266ec459a28f87ed31e63a49af2dd6be Legacy running event mnemonic: flame-liar-mechanic-obscure Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 146513275 Root hash: 41322a8744e6f902b92b4c41283fa486db133d1181797b36cccfabf0efc77b28e7e235dae1465baa28bbe6341c3a9135 (root) VirtualMap state / chaos-extend-orphan-thank {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"dutch-bright-guess-own"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"spoon-urban-mention-style"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"clump-chimney-cable-explain"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"struggle-city-total-gospel"}}} | |||||||||
| node2 | 7m 9.710s | 2025-12-23 14:28:01.220 | 10659 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 2 preconsensus files on disk. | |
| First file: data/saved/preconsensus-events/2/2025/12/23/2025-12-23T14+24+52.339344545Z_seq1_minr474_maxr5474_orgn0.pces Last file: data/saved/preconsensus-events/2/2025/12/23/2025-12-23T14+21+06.515558937Z_seq0_minr1_maxr501_orgn0.pces | |||||||||
| node2 | 7m 9.710s | 2025-12-23 14:28:01.220 | 10660 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 903 File: data/saved/preconsensus-events/2/2025/12/23/2025-12-23T14+24+52.339344545Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node2 | 7m 9.710s | 2025-12-23 14:28:01.220 | 10661 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node4 | 7m 9.715s | 2025-12-23 14:28:01.225 | 1840 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/10 for 931 | |
| node2 | 7m 9.718s | 2025-12-23 14:28:01.228 | 10662 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node4 | 7m 9.718s | 2025-12-23 14:28:01.228 | 1841 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 931 Timestamp: 2025-12-23T14:28:00.023418742Z Next consensus number: 28410 Legacy running event hash: bd3eebfa6c5104c23ae8f78aa7603929bd4578b82a1c81184805697776a7855e266ec459a28f87ed31e63a49af2dd6be Legacy running event mnemonic: flame-liar-mechanic-obscure Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 146513275 Root hash: 41322a8744e6f902b92b4c41283fa486db133d1181797b36cccfabf0efc77b28e7e235dae1465baa28bbe6341c3a9135 (root) VirtualMap state / chaos-extend-orphan-thank {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"dutch-bright-guess-own"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"spoon-urban-mention-style"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"clump-chimney-cable-explain"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"struggle-city-total-gospel"}}} | |||||||||
| node2 | 7m 9.719s | 2025-12-23 14:28:01.229 | 10663 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 931 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/931 {"round":931,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/931/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node2 | 7m 9.720s | 2025-12-23 14:28:01.230 | 10664 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/2/123/249 | |
| node4 | 7m 9.730s | 2025-12-23 14:28:01.240 | 1842 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 2 preconsensus files on disk. | |
| First file: data/saved/preconsensus-events/4/2025/12/23/2025-12-23T14+21+06.468510091Z_seq0_minr1_maxr399_orgn0.pces Last file: data/saved/preconsensus-events/4/2025/12/23/2025-12-23T14+27+03.289649484Z_seq1_minr772_maxr1272_orgn799.pces | |||||||||
| node4 | 7m 9.730s | 2025-12-23 14:28:01.240 | 1843 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 903 File: data/saved/preconsensus-events/4/2025/12/23/2025-12-23T14+27+03.289649484Z_seq1_minr772_maxr1272_orgn799.pces | |||||||||
| node3 | 7m 9.731s | 2025-12-23 14:28:01.241 | 10687 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/50 for 931 | |
| node4 | 7m 9.731s | 2025-12-23 14:28:01.241 | 1844 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node3 | 7m 9.733s | 2025-12-23 14:28:01.243 | 10688 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 931 Timestamp: 2025-12-23T14:28:00.023418742Z Next consensus number: 28410 Legacy running event hash: bd3eebfa6c5104c23ae8f78aa7603929bd4578b82a1c81184805697776a7855e266ec459a28f87ed31e63a49af2dd6be Legacy running event mnemonic: flame-liar-mechanic-obscure Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 146513275 Root hash: 41322a8744e6f902b92b4c41283fa486db133d1181797b36cccfabf0efc77b28e7e235dae1465baa28bbe6341c3a9135 (root) VirtualMap state / chaos-extend-orphan-thank {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"dutch-bright-guess-own"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"spoon-urban-mention-style"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"clump-chimney-cable-explain"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"struggle-city-total-gospel"}}} | |||||||||
| node0 | 7m 9.737s | 2025-12-23 14:28:01.247 | 10649 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/50 for 931 | |
| node0 | 7m 9.739s | 2025-12-23 14:28:01.249 | 10650 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 931 Timestamp: 2025-12-23T14:28:00.023418742Z Next consensus number: 28410 Legacy running event hash: bd3eebfa6c5104c23ae8f78aa7603929bd4578b82a1c81184805697776a7855e266ec459a28f87ed31e63a49af2dd6be Legacy running event mnemonic: flame-liar-mechanic-obscure Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 146513275 Root hash: 41322a8744e6f902b92b4c41283fa486db133d1181797b36cccfabf0efc77b28e7e235dae1465baa28bbe6341c3a9135 (root) VirtualMap state / chaos-extend-orphan-thank {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"dutch-bright-guess-own"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"spoon-urban-mention-style"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"clump-chimney-cable-explain"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"struggle-city-total-gospel"}}} | |||||||||
| node4 | 7m 9.739s | 2025-12-23 14:28:01.249 | 1845 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node3 | 7m 9.740s | 2025-12-23 14:28:01.250 | 10689 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 2 preconsensus files on disk. | |
| First file: data/saved/preconsensus-events/3/2025/12/23/2025-12-23T14+21+06.353176078Z_seq0_minr1_maxr501_orgn0.pces Last file: data/saved/preconsensus-events/3/2025/12/23/2025-12-23T14+24+52.320371446Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node3 | 7m 9.740s | 2025-12-23 14:28:01.250 | 10690 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 903 File: data/saved/preconsensus-events/3/2025/12/23/2025-12-23T14+24+52.320371446Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node4 | 7m 9.740s | 2025-12-23 14:28:01.250 | 1846 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 931 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/931 {"round":931,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/931/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node3 | 7m 9.742s | 2025-12-23 14:28:01.252 | 10691 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node4 | 7m 9.742s | 2025-12-23 14:28:01.252 | 1847 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/4/123/1 | |
| node0 | 7m 9.746s | 2025-12-23 14:28:01.256 | 10651 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 2 preconsensus files on disk. | |
| First file: data/saved/preconsensus-events/0/2025/12/23/2025-12-23T14+21+06.555430570Z_seq0_minr1_maxr501_orgn0.pces Last file: data/saved/preconsensus-events/0/2025/12/23/2025-12-23T14+24+52.381129600Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node0 | 7m 9.746s | 2025-12-23 14:28:01.256 | 10652 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 903 File: data/saved/preconsensus-events/0/2025/12/23/2025-12-23T14+24+52.381129600Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node0 | 7m 9.749s | 2025-12-23 14:28:01.259 | 10653 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node3 | 7m 9.751s | 2025-12-23 14:28:01.261 | 10692 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node3 | 7m 9.751s | 2025-12-23 14:28:01.261 | 10693 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 931 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/931 {"round":931,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/931/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node3 | 7m 9.753s | 2025-12-23 14:28:01.263 | 10694 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/3/123/249 | |
| node1 | 7m 9.756s | 2025-12-23 14:28:01.266 | 10799 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Successfully created a snapshot on demand in /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/swirlds-tmp/50 for 931 | |
| node0 | 7m 9.757s | 2025-12-23 14:28:01.267 | 10654 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node0 | 7m 9.758s | 2025-12-23 14:28:01.268 | 10655 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 931 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/931 {"round":931,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/931/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node1 | 7m 9.759s | 2025-12-23 14:28:01.269 | 10800 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Information for state written to disk: | |
| Round: 931 Timestamp: 2025-12-23T14:28:00.023418742Z Next consensus number: 28410 Legacy running event hash: bd3eebfa6c5104c23ae8f78aa7603929bd4578b82a1c81184805697776a7855e266ec459a28f87ed31e63a49af2dd6be Legacy running event mnemonic: flame-liar-mechanic-obscure Rounds non-ancient: 26 Creation version: SemanticVersion[major=1, minor=0, patch=0, pre=, build=] Minimum judge hash code: 146513275 Root hash: 41322a8744e6f902b92b4c41283fa486db133d1181797b36cccfabf0efc77b28e7e235dae1465baa28bbe6341c3a9135 (root) VirtualMap state / chaos-extend-orphan-thank {"Queues (Queue States)":{},"VirtualMapMetadata":{"firstLeafPath":4,"lastLeafPath":8},"Singletons":{"ConsistencyTestingToolService.STATE_LONG":{"path":8,"mnemonic":"dutch-bright-guess-own"},"ConsistencyTestingToolService.ROUND_HANDLED":{"path":6,"mnemonic":"spoon-urban-mention-style"},"RosterService.ROSTER_STATE":{"path":5,"mnemonic":"clump-chimney-cable-explain"},"PlatformStateService.PLATFORM_STATE":{"path":7,"mnemonic":"struggle-city-total-gospel"}}} | |||||||||
| node0 | 7m 9.760s | 2025-12-23 14:28:01.270 | 10656 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/0/123/249 | |
| node1 | 7m 9.767s | 2025-12-23 14:28:01.277 | 10801 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 2 preconsensus files on disk. | |
| First file: data/saved/preconsensus-events/1/2025/12/23/2025-12-23T14+21+06.465098814Z_seq0_minr1_maxr501_orgn0.pces Last file: data/saved/preconsensus-events/1/2025/12/23/2025-12-23T14+24+52.305915291Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node1 | 7m 9.767s | 2025-12-23 14:28:01.277 | 10802 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Found 1 preconsensus event file meeting specified criteria to copy. | |
| Lower bound: 903 File: data/saved/preconsensus-events/1/2025/12/23/2025-12-23T14+24+52.305915291Z_seq1_minr474_maxr5474_orgn0.pces | |||||||||
| node1 | 7m 9.771s | 2025-12-23 14:28:01.281 | 10803 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Copying 1 preconsensus event file(s) | |
| node1 | 7m 9.782s | 2025-12-23 14:28:01.292 | 10818 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | BestEffortPcesFileCopy: | Finished copying 1 preconsensus event file(s) | |
| node1 | 7m 9.783s | 2025-12-23 14:28:01.293 | 10819 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | SignedStateFileWriter: | Finished writing state for round 931 to disk. Reason: PERIODIC_SNAPSHOT, directory: /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/931 {"round":931,"freezeState":false,"reason":"PERIODIC_SNAPSHOT","directory":"file:///opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/931/"} [com.swirlds.logging.legacy.payload.StateSavedToDiskPayload] | |
| node1 | 7m 9.785s | 2025-12-23 14:28:01.295 | 10820 | INFO | STATE_TO_DISK | <<scheduler StateSnapshotManager>> | FileUtils: | deleting directory /opt/hgcapp/services-hedera/HapiApp2.0/data/saved/com.swirlds.demo.consistency.ConsistencyTestingToolMain/1/123/249 | |
| node1 | 8m 4.922s | 2025-12-23 14:28:56.432 | 12167 | WARN | SOCKET_EXCEPTIONS | <<platform-core: SyncProtocolWith3 1 to 3>> | NetworkUtils: | Connection broken: 1 -> 3 | |
| com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-12-23T14:28:56.430750810Z at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:160) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.runProtocol(RpcPeerProtocol.java:293) at com.swirlds.platform.network.communication.states.ProtocolNegotiated.transition(ProtocolNegotiated.java:47) at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79) at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:65) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200) at com.swirlds.common.threading.framework.internal.AbstractThreadConfiguration.lambda$wrapRunnableWithSnapshot$3(AbstractThreadConfiguration.java:654) at java.base/java.lang.Thread.run(Thread.java:1583) Suppressed: java.util.concurrent.ExecutionException: java.net.SocketException: Connection or outbound has closed at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191) at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154) ... 8 more Caused by: java.net.SocketException: Connection or outbound has closed at java.base/sun.security.ssl.SSLSocketImpl$AppOutputStream.write(SSLSocketImpl.java:1297) at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.write(AbstractStreamExtension.java:115) at com.swirlds.common.io.extendable.ExtendableOutputStream.write(ExtendableOutputStream.java:64) at java.base/java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:125) at java.base/java.io.BufferedOutputStream.implFlush(BufferedOutputStream.java:252) at java.base/java.io.BufferedOutputStream.flush(BufferedOutputStream.java:240) at java.base/java.io.DataOutputStream.flush(DataOutputStream.java:131) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.writeMessages(RpcPeerProtocol.java:387) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$2(RpcPeerProtocol.java:297) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ... 2 more Caused by: java.util.concurrent.ExecutionException: java.net.SocketException: Connection reset at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191) at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154) ... 8 more Caused by: java.net.SocketException: Connection reset at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318) at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346) at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796) at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099) at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489) at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483) at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70) at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461) at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066) at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73) at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63) at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291) at java.base/java.io.BufferedInputStream.read1(BufferedInputStream.java:347) at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:420) at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:399) at java.base/java.io.DataInputStream.readFully(DataInputStream.java:208) at java.base/java.io.DataInputStream.readShort(DataInputStream.java:319) at org.hiero.base.io.streams.AugmentedDataInputStream.readShort(AugmentedDataInputStream.java:158) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.readMessages(RpcPeerProtocol.java:431) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$1(RpcPeerProtocol.java:296) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ... 2 more | |||||||||
| node0 | 8m 4.923s | 2025-12-23 14:28:56.433 | 12011 | WARN | SOCKET_EXCEPTIONS | <<platform-core: SyncProtocolWith3 0 to 3>> | NetworkUtils: | Connection broken: 0 -> 3 | |
| com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-12-23T14:28:56.430547781Z at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:160) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.runProtocol(RpcPeerProtocol.java:293) at com.swirlds.platform.network.communication.states.ProtocolNegotiated.transition(ProtocolNegotiated.java:47) at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79) at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:65) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200) at com.swirlds.common.threading.framework.internal.AbstractThreadConfiguration.lambda$wrapRunnableWithSnapshot$3(AbstractThreadConfiguration.java:654) at java.base/java.lang.Thread.run(Thread.java:1583) Suppressed: java.util.concurrent.ExecutionException: java.net.SocketException: Connection or outbound has closed at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191) at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154) ... 8 more Caused by: java.net.SocketException: Connection or outbound has closed at java.base/sun.security.ssl.SSLSocketImpl$AppOutputStream.write(SSLSocketImpl.java:1297) at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.write(AbstractStreamExtension.java:115) at com.swirlds.common.io.extendable.ExtendableOutputStream.write(ExtendableOutputStream.java:64) at java.base/java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:125) at java.base/java.io.BufferedOutputStream.implFlush(BufferedOutputStream.java:252) at java.base/java.io.BufferedOutputStream.flush(BufferedOutputStream.java:240) at java.base/java.io.DataOutputStream.flush(DataOutputStream.java:131) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.writeMessages(RpcPeerProtocol.java:387) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$2(RpcPeerProtocol.java:297) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ... 2 more Caused by: java.util.concurrent.ExecutionException: java.net.SocketException: Connection reset at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191) at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154) ... 8 more Caused by: java.net.SocketException: Connection reset at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318) at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346) at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796) at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099) at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489) at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483) at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70) at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461) at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066) at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73) at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63) at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291) at java.base/java.io.BufferedInputStream.read1(BufferedInputStream.java:347) at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:420) at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:399) at java.base/java.io.DataInputStream.readFully(DataInputStream.java:208) at java.base/java.io.DataInputStream.readShort(DataInputStream.java:319) at org.hiero.base.io.streams.AugmentedDataInputStream.readShort(AugmentedDataInputStream.java:158) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.readMessages(RpcPeerProtocol.java:431) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$1(RpcPeerProtocol.java:296) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ... 2 more | |||||||||
| node2 | 8m 4.925s | 2025-12-23 14:28:56.435 | 12007 | WARN | SOCKET_EXCEPTIONS | <<platform-core: SyncProtocolWith3 2 to 3>> | NetworkUtils: | Connection broken: 2 -> 3 | |
| com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-12-23T14:28:56.430388616Z at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:160) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.runProtocol(RpcPeerProtocol.java:293) at com.swirlds.platform.network.communication.states.ProtocolNegotiated.transition(ProtocolNegotiated.java:47) at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79) at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:65) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200) at com.swirlds.common.threading.framework.internal.AbstractThreadConfiguration.lambda$wrapRunnableWithSnapshot$3(AbstractThreadConfiguration.java:654) at java.base/java.lang.Thread.run(Thread.java:1583) Suppressed: java.util.concurrent.ExecutionException: java.net.SocketException: Connection or outbound has closed at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191) at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154) ... 8 more Caused by: java.net.SocketException: Connection or outbound has closed at java.base/sun.security.ssl.SSLSocketImpl$AppOutputStream.write(SSLSocketImpl.java:1297) at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.write(AbstractStreamExtension.java:115) at com.swirlds.common.io.extendable.ExtendableOutputStream.write(ExtendableOutputStream.java:64) at java.base/java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:125) at java.base/java.io.BufferedOutputStream.implFlush(BufferedOutputStream.java:252) at java.base/java.io.BufferedOutputStream.flush(BufferedOutputStream.java:240) at java.base/java.io.DataOutputStream.flush(DataOutputStream.java:131) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.writeMessages(RpcPeerProtocol.java:387) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$2(RpcPeerProtocol.java:297) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ... 2 more Caused by: java.util.concurrent.ExecutionException: java.net.SocketException: Connection reset at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191) at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154) ... 8 more Caused by: java.net.SocketException: Connection reset at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318) at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346) at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796) at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099) at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489) at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483) at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70) at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461) at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066) at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73) at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63) at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291) at java.base/java.io.BufferedInputStream.read1(BufferedInputStream.java:347) at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:420) at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:399) at java.base/java.io.DataInputStream.readFully(DataInputStream.java:208) at java.base/java.io.DataInputStream.readShort(DataInputStream.java:319) at org.hiero.base.io.streams.AugmentedDataInputStream.readShort(AugmentedDataInputStream.java:158) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.readMessages(RpcPeerProtocol.java:431) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$1(RpcPeerProtocol.java:296) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ... 2 more | |||||||||
| node4 | 8m 4.926s | 2025-12-23 14:28:56.436 | 3194 | WARN | SOCKET_EXCEPTIONS | <<platform-core: SyncProtocolWith3 4 to 3>> | NetworkUtils: | Connection broken: 4 <- 3 | |
| com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-12-23T14:28:56.433555776Z at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:160) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.runProtocol(RpcPeerProtocol.java:293) at com.swirlds.platform.network.communication.states.ProtocolNegotiated.transition(ProtocolNegotiated.java:47) at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79) at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:65) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200) at com.swirlds.common.threading.framework.internal.AbstractThreadConfiguration.lambda$wrapRunnableWithSnapshot$3(AbstractThreadConfiguration.java:654) at java.base/java.lang.Thread.run(Thread.java:1583) Suppressed: java.util.concurrent.ExecutionException: java.net.SocketException: Connection or outbound has closed at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191) at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154) ... 8 more Caused by: java.net.SocketException: Connection or outbound has closed at java.base/sun.security.ssl.SSLSocketImpl$AppOutputStream.write(SSLSocketImpl.java:1297) at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.write(AbstractStreamExtension.java:115) at com.swirlds.common.io.extendable.ExtendableOutputStream.write(ExtendableOutputStream.java:64) at java.base/java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:125) at java.base/java.io.BufferedOutputStream.implFlush(BufferedOutputStream.java:252) at java.base/java.io.BufferedOutputStream.flush(BufferedOutputStream.java:240) at java.base/java.io.DataOutputStream.flush(DataOutputStream.java:131) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.writeMessages(RpcPeerProtocol.java:387) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$2(RpcPeerProtocol.java:297) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ... 2 more Caused by: java.util.concurrent.ExecutionException: java.net.SocketException: Connection reset at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191) at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154) ... 8 more Caused by: java.net.SocketException: Connection reset at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318) at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346) at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796) at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099) at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489) at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483) at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70) at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461) at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066) at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73) at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63) at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291) at java.base/java.io.BufferedInputStream.read1(BufferedInputStream.java:347) at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:420) at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:399) at java.base/java.io.DataInputStream.readFully(DataInputStream.java:208) at java.base/java.io.DataInputStream.readShort(DataInputStream.java:319) at org.hiero.base.io.streams.AugmentedDataInputStream.readShort(AugmentedDataInputStream.java:158) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.readMessages(RpcPeerProtocol.java:431) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$1(RpcPeerProtocol.java:296) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ... 2 more | |||||||||
| node1 | 8m 5.392s | 2025-12-23 14:28:56.902 | 12168 | WARN | SOCKET_EXCEPTIONS | <<platform-core: SyncProtocolWith2 1 to 2>> | NetworkUtils: | Connection broken: 1 -> 2 | |
| com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-12-23T14:28:56.899800206Z at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:160) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.runProtocol(RpcPeerProtocol.java:293) at com.swirlds.platform.network.communication.states.ProtocolNegotiated.transition(ProtocolNegotiated.java:47) at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79) at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:65) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200) at com.swirlds.common.threading.framework.internal.AbstractThreadConfiguration.lambda$wrapRunnableWithSnapshot$3(AbstractThreadConfiguration.java:654) at java.base/java.lang.Thread.run(Thread.java:1583) Suppressed: java.util.concurrent.ExecutionException: java.net.SocketException: Connection or outbound has closed at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191) at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154) ... 
8 more Caused by: java.net.SocketException: Connection or outbound has closed at java.base/sun.security.ssl.SSLSocketImpl$AppOutputStream.write(SSLSocketImpl.java:1297) at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.write(AbstractStreamExtension.java:115) at com.swirlds.common.io.extendable.ExtendableOutputStream.write(ExtendableOutputStream.java:64) at java.base/java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:125) at java.base/java.io.BufferedOutputStream.implFlush(BufferedOutputStream.java:252) at java.base/java.io.BufferedOutputStream.flush(BufferedOutputStream.java:240) at java.base/java.io.DataOutputStream.flush(DataOutputStream.java:131) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.writeMessages(RpcPeerProtocol.java:387) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$2(RpcPeerProtocol.java:297) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ... 2 more Caused by: java.util.concurrent.ExecutionException: java.net.SocketException: Connection reset at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191) at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154) ... 
8 more Caused by: java.net.SocketException: Connection reset at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318) at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346) at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796) at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099) at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489) at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483) at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70) at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461) at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066) at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73) at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63) at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291) at java.base/java.io.BufferedInputStream.read1(BufferedInputStream.java:347) at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:420) at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:399) at java.base/java.io.DataInputStream.readFully(DataInputStream.java:208) at java.base/java.io.DataInputStream.readShort(DataInputStream.java:319) at org.hiero.base.io.streams.AugmentedDataInputStream.readShort(AugmentedDataInputStream.java:158) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.readMessages(RpcPeerProtocol.java:431) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$1(RpcPeerProtocol.java:296) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317) at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ... 2 more | |||||||||
| node4 | 8m 5.392s | 2025-12-23 14:28:56.902 | 3195 | WARN | SOCKET_EXCEPTIONS | <<platform-core: SyncProtocolWith2 4 to 2>> | NetworkUtils: | Connection broken: 4 <- 2 | |
| com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-12-23T14:28:56.899647852Z at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:160) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.runProtocol(RpcPeerProtocol.java:293) at com.swirlds.platform.network.communication.states.ProtocolNegotiated.transition(ProtocolNegotiated.java:47) at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79) at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:65) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200) at com.swirlds.common.threading.framework.internal.AbstractThreadConfiguration.lambda$wrapRunnableWithSnapshot$3(AbstractThreadConfiguration.java:654) at java.base/java.lang.Thread.run(Thread.java:1583) Suppressed: java.util.concurrent.ExecutionException: java.net.SocketException: Connection or outbound has closed at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191) at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154) ... 
8 more Caused by: java.net.SocketException: Connection or outbound has closed at java.base/sun.security.ssl.SSLSocketImpl$AppOutputStream.write(SSLSocketImpl.java:1297) at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.write(AbstractStreamExtension.java:115) at com.swirlds.common.io.extendable.ExtendableOutputStream.write(ExtendableOutputStream.java:64) at java.base/java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:125) at java.base/java.io.BufferedOutputStream.implFlush(BufferedOutputStream.java:252) at java.base/java.io.BufferedOutputStream.flush(BufferedOutputStream.java:240) at java.base/java.io.DataOutputStream.flush(DataOutputStream.java:131) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.writeMessages(RpcPeerProtocol.java:387) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$2(RpcPeerProtocol.java:297) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ... 2 more Caused by: java.util.concurrent.ExecutionException: java.net.SocketException: Connection reset at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191) at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154) ... 
8 more Caused by: java.net.SocketException: Connection reset at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318) at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346) at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796) at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099) at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489) at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483) at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70) at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461) at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066) at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73) at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63) at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291) at java.base/java.io.BufferedInputStream.read1(BufferedInputStream.java:347) at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:420) at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:399) at java.base/java.io.DataInputStream.readFully(DataInputStream.java:208) at java.base/java.io.DataInputStream.readShort(DataInputStream.java:319) at org.hiero.base.io.streams.AugmentedDataInputStream.readShort(AugmentedDataInputStream.java:158) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.readMessages(RpcPeerProtocol.java:431) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$1(RpcPeerProtocol.java:296) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317) at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ... 2 more | |||||||||
| node1 | 8m 5.407s | 2025-12-23 14:28:56.917 | 12169 | WARN | SOCKET_EXCEPTIONS | <<platform-core: SyncProtocolWith0 1 to 0>> | NetworkUtils: | Connection broken: 1 <- 0 | |
| com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-12-23T14:28:56.917285442Z at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:160) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.runProtocol(RpcPeerProtocol.java:293) at com.swirlds.platform.network.communication.states.ProtocolNegotiated.transition(ProtocolNegotiated.java:47) at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79) at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:65) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200) at com.swirlds.common.threading.framework.internal.AbstractThreadConfiguration.lambda$wrapRunnableWithSnapshot$3(AbstractThreadConfiguration.java:654) at java.base/java.lang.Thread.run(Thread.java:1583) Suppressed: java.util.concurrent.ExecutionException: java.net.SocketException: Connection or outbound has closed at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191) at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154) ... 
8 more Caused by: java.net.SocketException: Connection or outbound has closed at java.base/sun.security.ssl.SSLSocketImpl$AppOutputStream.write(SSLSocketImpl.java:1297) at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.write(AbstractStreamExtension.java:115) at com.swirlds.common.io.extendable.ExtendableOutputStream.write(ExtendableOutputStream.java:64) at java.base/java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:125) at java.base/java.io.BufferedOutputStream.implFlush(BufferedOutputStream.java:252) at java.base/java.io.BufferedOutputStream.flush(BufferedOutputStream.java:240) at java.base/java.io.DataOutputStream.flush(DataOutputStream.java:131) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.writeMessages(RpcPeerProtocol.java:387) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$2(RpcPeerProtocol.java:297) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ... 2 more Caused by: java.util.concurrent.ExecutionException: java.net.SocketException: Connection reset at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191) at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154) ... 
8 more Caused by: java.net.SocketException: Connection reset at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318) at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346) at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796) at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099) at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489) at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483) at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70) at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461) at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066) at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73) at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63) at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291) at java.base/java.io.BufferedInputStream.read1(BufferedInputStream.java:347) at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:420) at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:399) at java.base/java.io.DataInputStream.readFully(DataInputStream.java:208) at java.base/java.io.DataInputStream.readShort(DataInputStream.java:319) at org.hiero.base.io.streams.AugmentedDataInputStream.readShort(AugmentedDataInputStream.java:158) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.readMessages(RpcPeerProtocol.java:431) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$1(RpcPeerProtocol.java:296) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317) at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ... 2 more | |||||||||
| node4 | 8m 5.407s | 2025-12-23 14:28:56.917 | 3196 | WARN | SOCKET_EXCEPTIONS | <<platform-core: SyncProtocolWith0 4 to 0>> | NetworkUtils: | Connection broken: 4 <- 0 | |
| com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-12-23T14:28:56.917090510Z at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:160) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.runProtocol(RpcPeerProtocol.java:293) at com.swirlds.platform.network.communication.states.ProtocolNegotiated.transition(ProtocolNegotiated.java:47) at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79) at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:65) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200) at com.swirlds.common.threading.framework.internal.AbstractThreadConfiguration.lambda$wrapRunnableWithSnapshot$3(AbstractThreadConfiguration.java:654) at java.base/java.lang.Thread.run(Thread.java:1583) Suppressed: java.util.concurrent.ExecutionException: java.net.SocketException: Connection or outbound has closed at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191) at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154) ... 
8 more Caused by: java.net.SocketException: Connection or outbound has closed at java.base/sun.security.ssl.SSLSocketImpl$AppOutputStream.write(SSLSocketImpl.java:1297) at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.write(AbstractStreamExtension.java:115) at com.swirlds.common.io.extendable.ExtendableOutputStream.write(ExtendableOutputStream.java:64) at java.base/java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:125) at java.base/java.io.BufferedOutputStream.implFlush(BufferedOutputStream.java:252) at java.base/java.io.BufferedOutputStream.flush(BufferedOutputStream.java:240) at java.base/java.io.DataOutputStream.flush(DataOutputStream.java:131) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.writeMessages(RpcPeerProtocol.java:387) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$2(RpcPeerProtocol.java:297) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ... 2 more Caused by: java.util.concurrent.ExecutionException: java.net.SocketException: Connection reset at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191) at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154) ... 
8 more Caused by: java.net.SocketException: Connection reset at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318) at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346) at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796) at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099) at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489) at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483) at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70) at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461) at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066) at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73) at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63) at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291) at java.base/java.io.BufferedInputStream.read1(BufferedInputStream.java:347) at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:420) at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:399) at java.base/java.io.DataInputStream.readFully(DataInputStream.java:208) at java.base/java.io.DataInputStream.readShort(DataInputStream.java:319) at org.hiero.base.io.streams.AugmentedDataInputStream.readShort(AugmentedDataInputStream.java:158) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.readMessages(RpcPeerProtocol.java:431) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$1(RpcPeerProtocol.java:296) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317) at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ... 2 more | |||||||||
| node4 | 8m 5.938s | 2025-12-23 14:28:57.448 | 3197 | WARN | SOCKET_EXCEPTIONS | <<platform-core: SyncProtocolWith1 4 to 1>> | NetworkUtils: | Connection broken: 4 <- 1 | |
| com.swirlds.common.threading.pool.ParallelExecutionException: Time thrown: 2025-12-23T14:28:57.444840579Z at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:160) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.runProtocol(RpcPeerProtocol.java:293) at com.swirlds.platform.network.communication.states.ProtocolNegotiated.transition(ProtocolNegotiated.java:47) at com.swirlds.platform.network.communication.Negotiator.execute(Negotiator.java:79) at com.swirlds.platform.network.communication.ProtocolNegotiatorThread.run(ProtocolNegotiatorThread.java:65) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.doWork(StoppableThreadImpl.java:599) at com.swirlds.common.threading.framework.internal.StoppableThreadImpl.run(StoppableThreadImpl.java:200) at com.swirlds.common.threading.framework.internal.AbstractThreadConfiguration.lambda$wrapRunnableWithSnapshot$3(AbstractThreadConfiguration.java:654) at java.base/java.lang.Thread.run(Thread.java:1583) Suppressed: java.util.concurrent.ExecutionException: java.net.SocketException: Connection or outbound has closed at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191) at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154) ... 
8 more Caused by: java.net.SocketException: Connection or outbound has closed at java.base/sun.security.ssl.SSLSocketImpl$AppOutputStream.write(SSLSocketImpl.java:1297) at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.write(AbstractStreamExtension.java:115) at com.swirlds.common.io.extendable.ExtendableOutputStream.write(ExtendableOutputStream.java:64) at java.base/java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:125) at java.base/java.io.BufferedOutputStream.implFlush(BufferedOutputStream.java:252) at java.base/java.io.BufferedOutputStream.flush(BufferedOutputStream.java:240) at java.base/java.io.DataOutputStream.flush(DataOutputStream.java:131) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.writeMessages(RpcPeerProtocol.java:387) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$2(RpcPeerProtocol.java:297) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ... 2 more Caused by: java.util.concurrent.ExecutionException: java.net.SocketException: Connection reset at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191) at com.swirlds.common.threading.pool.CachedPoolParallelExecutor.doParallelWithHandler(CachedPoolParallelExecutor.java:154) ... 
8 more Caused by: java.net.SocketException: Connection reset at java.base/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:318) at java.base/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:346) at java.base/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:796) at java.base/java.net.Socket$SocketInputStream.read(Socket.java:1099) at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:489) at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:483) at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70) at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1461) at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1066) at com.swirlds.common.io.extendable.extensions.AbstractStreamExtension.read(AbstractStreamExtension.java:73) at com.swirlds.common.io.extendable.ExtendableInputStream.read(ExtendableInputStream.java:63) at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:291) at java.base/java.io.BufferedInputStream.read1(BufferedInputStream.java:347) at java.base/java.io.BufferedInputStream.implRead(BufferedInputStream.java:420) at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:399) at java.base/java.io.DataInputStream.readFully(DataInputStream.java:208) at java.base/java.io.DataInputStream.readShort(DataInputStream.java:319) at org.hiero.base.io.streams.AugmentedDataInputStream.readShort(AugmentedDataInputStream.java:158) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.readMessages(RpcPeerProtocol.java:431) at com.swirlds.platform.network.protocol.rpc.RpcPeerProtocol.lambda$runProtocol$1(RpcPeerProtocol.java:296) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:24) at org.hiero.base.concurrent.ThrowingRunnable.call(ThrowingRunnable.java:9) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317) at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ... 2 more | |||||||||