tag:status.combahton.net,2005:/historyaurologic GmbH Status - Incident History2024-03-29T05:23:46Zaurologic GmbHtag:status.combahton.net,2005:Incident/202359182024-03-14T15:38:43Z2024-03-14T15:38:43ZOutage - l4-ddos-filter01.ffm5<p><small>Mar <var data-var='date'>14</var>, <var data-var='time'>15:38</var> UTC</small><br><strong>Resolved</strong> - This has been resolved. The system was cold reset yesterday evening and we're currently validating its integrity before taking it back online. Thanks to redundancy, there was zero impact to our customers.</p><p><small>Mar <var data-var='date'>13</var>, <var data-var='time'>17:32</var> UTC</small><br><strong>Investigating</strong> - We have identified an outage of l4-ddos-filter01.ffm5, located at Equinix FR7. As the machine is no longer reachable via IPMI and management Ethernet, we will drive there this evening. The RETN session in this location has been shut down for the moment to avoid negative capacity implications. Upstream connectivity is provided by our routers at Interxion.</p>tag:status.combahton.net,2005:Incident/201526072024-03-04T01:08:06Z2024-03-04T01:08:06ZPartial outage CEPH cluster<p><small>Mar <var data-var='date'> 4</var>, <var data-var='time'>01:08</var> UTC</small><br><strong>Resolved</strong> - The affected cluster is healthy again; delayed events have been processed in the meantime.</p><p><small>Mar <var data-var='date'> 3</var>, <var data-var='time'>23:58</var> UTC</small><br><strong>Monitoring</strong> - Following multiple simultaneous hardware failures, we are currently facing a cascading partial outage of the CEPH cluster hosting parts of the event-based functionality and metrics components in our infrastructure. This may delay actions performed through our API and connected components, and may affect the availability of DDoS incident metrics. No further impact is expected, and there is no risk of data loss for our customers.
Recovery is in progress and we expect it to complete soon.</p>tag:status.combahton.net,2005:Incident/201372432024-03-01T19:40:21Z2024-03-01T19:40:21ZFFM1 - Power Dip<p><small>Mar <var data-var='date'> 1</var>, <var data-var='time'>19:40</var> UTC</small><br><strong>Resolved</strong> - We experienced a power dip of the public grid at FFM1 (Tornado Datacenter FRA01). UPS and generator automatically took over the load; no impact was observed for customers hosted within this facility.</p>tag:status.combahton.net,2005:Incident/192365922024-01-05T00:00:02Z2024-01-05T00:00:02ZDatabase and Kubernetes Cluster - Maintenance<p><small>Jan <var data-var='date'> 5</var>, <var data-var='time'>00:00</var> UTC</small><br><strong>Completed</strong> - The scheduled maintenance has been completed.</p><p><small>Dec <var data-var='date'>24</var>, <var data-var='time'>10:35</var> UTC</small><br><strong>Update</strong> - One physical node, hosting multiple Kubernetes worker nodes as well as database applications, has already been migrated. No noticeable impact; redundancy worked as expected. One node remains to be migrated by the end of December.</p><p><small>Dec <var data-var='date'> 5</var>, <var data-var='time'>00:02</var> UTC</small><br><strong>In progress</strong> - Scheduled maintenance is currently in progress. We will provide updates as necessary.</p><p><small>Nov <var data-var='date'>28</var>, <var data-var='time'>19:29</var> UTC</small><br><strong>Scheduled</strong> - During December, we'll be moving our application database and Kubernetes cluster to a geo-redundant multi-datacenter design, hosted across three different datacenter sites in Frankfurt am Main.
The maintenance is necessary to increase capacity as well as redundancy for our microservices, which power the company stack.<br /><br />We do not expect any impact to our customers.</p>tag:status.combahton.net,2005:Incident/195416012023-12-27T09:53:21Z2023-12-27T09:53:21ZKernel Panic on core01.egh1.nl<p><small>Dec <var data-var='date'>27</var>, <var data-var='time'>09:53</var> UTC</small><br><strong>Resolved</strong> - This incident has been resolved.</p><p><small>Dec <var data-var='date'>27</var>, <var data-var='time'>09:53</var> UTC</small><br><strong>Investigating</strong> - We experienced a simultaneous kernel panic of both Virtual Chassis members of the router core01.egh1.nl, which led to a hard cut of BGP sessions on that router, as well as a connectivity issue for EGH1 of about two minutes. Traffic traversing this router might also have been impacted for a short time. Operation within this region was automatically restored.<br /><br />The incident is being investigated in depth.</p>tag:status.combahton.net,2005:Incident/192366232023-12-15T19:00:06Z2023-12-15T19:00:06ZApplication Logging and Metrics Cluster - Upgrade<p><small>Dec <var data-var='date'>15</var>, <var data-var='time'>19:00</var> UTC</small><br><strong>Completed</strong> - The scheduled maintenance has been completed.</p><p><small>Dec <var data-var='date'>15</var>, <var data-var='time'>09:00</var> UTC</small><br><strong>In progress</strong> - Scheduled maintenance is currently in progress. We will provide updates as necessary.</p><p><small>Nov <var data-var='date'>28</var>, <var data-var='time'>19:34</var> UTC</small><br><strong>Update</strong> - We will be undergoing scheduled maintenance during this time.</p><p><small>Nov <var data-var='date'>28</var>, <var data-var='time'>19:33</var> UTC</small><br><strong>Scheduled</strong> - On 15th December, we'll be conducting an upgrade of the application logging and metrics cluster, the so-called applog.
Capacity will be increased by a separate cluster consisting of physical nodes. Customers might experience slowness when querying sflow samples, ddos-incidents or ddos-metrics.<br /><br />The maintenance is necessary to enhance capacity for the release of planned ddos-protection machine learning features.</p>tag:status.combahton.net,2005:Incident/193856592023-12-10T13:10:57Z2023-12-10T13:10:57ZMaintenance - RETN (AS9002) Upstream<p><small>Dec <var data-var='date'>10</var>, <var data-var='time'>13:10</var> UTC</small><br><strong>Completed</strong> - The scheduled maintenance has been completed.</p><p><small>Dec <var data-var='date'> 9</var>, <var data-var='time'>13:12</var> UTC</small><br><strong>In progress</strong> - Scheduled maintenance is currently in progress. We will provide updates as necessary.</p><p><small>Dec <var data-var='date'> 9</var>, <var data-var='time'>13:07</var> UTC</small><br><strong>Scheduled</strong> - We had previously communicated a maintenance in https://status.combahton.net/incidents/wn8pc021yvy3. It was delayed because RETN ran out of public IP address space at Equinix FR5, and we neglected to update the maintenance post accordingly.<br /><br />Following yesterday's incident with link flaps, as described in https://status.combahton.net/incidents/c28y1q18ftk0, we decided to go with internal IP address space. RETN therefore had our ports provisioned yesterday, so we already drained connectivity in order to move those ports to Equinix FR7 (FFM5) instead.<br /><br />Fortunately that went quite smoothly, apart from the fact that our new crossconnects at Equinix FR7 don't provide any signal. RETN already checked and couldn't find any issue on their side. A trouble ticket is currently open with Equinix to get our new ports up. In short, there is no impact to our customers or capacity, as operation is still running over the other carriers.
That said, we saw no immediate need for communication.<br /><br />Once Equinix has resolved the issues on the new crossconnects, we'll take RETN at Equinix into operation. We'll update this maintenance ticket as appropriate.</p>tag:status.combahton.net,2005:Incident/193764862023-12-08T09:30:00Z2023-12-08T13:34:25ZInterconnection FFM3 (Interxion FRA16) - FFM5 (Equinix FR7) - Link flaps<p><small>Dec <var data-var='date'> 8</var>, <var data-var='time'>09:30</var> UTC</small><br><strong>Resolved</strong> - We're currently experiencing link flaps on our interconnection between FFM3 and FFM5 every ~40 minutes. The darkfiber operator is aware of the situation and is currently looking into the issue, as other customers with darkfiber (according to the provider) also seem to have reported an issue. We suspect a splice sleeve might not be fully closed, allowing water ingress to cause signal oscillations somewhere in Frankfurt.<br /><br />Thanks to our network architecture, we do not observe any impact to our customers.</p>tag:status.combahton.net,2005:Incident/192365482023-12-05T18:00:26Z2023-12-05T18:00:26ZDarkfiber Maintenance - FFM1, Tornado Datacenter<p><small>Dec <var data-var='date'> 5</var>, <var data-var='time'>18:00</var> UTC</small><br><strong>Completed</strong> - The scheduled maintenance has been completed.</p><p><small>Dec <var data-var='date'> 5</var>, <var data-var='time'>12:00</var> UTC</small><br><strong>In progress</strong> - Scheduled maintenance is currently in progress. We will provide updates as necessary.</p><p><small>Nov <var data-var='date'>28</var>, <var data-var='time'>19:21</var> UTC</small><br><strong>Scheduled</strong> - On 5th December, during the day, we will move one of the darkfiber pairs from Interxion FRA16 to Equinix FR7.
To do so, we have already implemented a third, diverse connection, provided over another route running alongside the A5 past Frankfurt Airport.<br /><br />As this is an entirely new route, we do not expect any impact to our customers. We will shift some traffic during the works, which is transparent to customers. The new route is necessary to build up our metro ring across the major Frankfurt datacenter sites: there will be a ring between Equinix, Tornado Datacenter and Interxion, with Equinix and Interxion already interconnected via 100Gbps interfaces.</p>tag:status.combahton.net,2005:Incident/192365652023-12-05T18:00:03Z2023-12-05T18:00:03ZRETN Upstream - Upgrade Maintenance<p><small>Dec <var data-var='date'> 5</var>, <var data-var='time'>18:00</var> UTC</small><br><strong>Completed</strong> - The scheduled maintenance has been completed.</p><p><small>Dec <var data-var='date'> 5</var>, <var data-var='time'>12:00</var> UTC</small><br><strong>In progress</strong> - Scheduled maintenance is currently in progress. We will provide updates as necessary.</p><p><small>Nov <var data-var='date'>28</var>, <var data-var='time'>19:25</var> UTC</small><br><strong>Scheduled</strong> - On 5th December, during the day, we will move our upstream ports with RETN (AS9002) from Interxion FRA16 to Equinix FR7.
We do not expect any impact to our customers, as connectivity will still be provided by three other upstreams.<br /><br />The maintenance is necessary to double our capacity with RETN in Frankfurt, while we bring DDoS protection at Equinix FR7 live with an initial 400Gbps, further increasing overall capacity.</p>tag:status.combahton.net,2005:Incident/192768542023-12-02T13:04:26Z2023-12-02T13:04:26ZRPD Failover - core01.egh1<p><small>Dec <var data-var='date'> 2</var>, <var data-var='time'>13:04</var> UTC</small><br><strong>Resolved</strong> - We were hit by a software bug which caused a segfault of the rpd (route processing daemon, which handles BGP) within Juniper JunOS on the router core01.egh1, resulting in a flap of BGP sessions for a 30-second period at around 13:42 CET. The process was automatically restarted by the control plane of the router.<br /><br />Following a quick analysis of the generated coredump, we know the exact cause and will patch the firmware on this equipment shortly.</p>tag:status.combahton.net,2005:Incident/192509052023-11-30T18:05:44Z2023-11-30T18:05:44ZPacketloss - Skylink Datacenter (EGH1)<p><small>Nov <var data-var='date'>30</var>, <var data-var='time'>18:05</var> UTC</small><br><strong>Resolved</strong> - The incident appears to be resolved since 17:40 CET, following traffic prioritization measures.</p><p><small>Nov <var data-var='date'>30</var>, <var data-var='time'>16:13</var> UTC</small><br><strong>Monitoring</strong> - At the moment, things look better from our monitoring. We don't have an ETTR yet; however, we see less link load on our end in Frankfurt. Awaiting final confirmation.</p><p><small>Nov <var data-var='date'>30</var>, <var data-var='time'>15:34</var> UTC</small><br><strong>Investigating</strong> - We're currently experiencing packet loss with Skylink Datacenter. The datacenter operator has informed us about a network outage on their end.
We are waiting for an ETTR.</p>tag:status.combahton.net,2005:Incident/191432032023-11-16T23:26:19Z2023-11-16T23:26:19ZOutage - Upstream Elisa (AS6667)<p><small>Nov <var data-var='date'>16</var>, <var data-var='time'>23:26</var> UTC</small><br><strong>Resolved</strong> - The issue appears to be resolved; sessions are up again. Traffic originating in or destined for Helsinki is currently preferred via Frankfurt; we'll move it back once we know more.</p><p><small>Nov <var data-var='date'>16</var>, <var data-var='time'>23:16</var> UTC</small><br><strong>Monitoring</strong> - We have been made aware of an outage of the Elisa upstream carrier at our Helsinki, Finland location, leading to brief packet loss for a couple of minutes. Traffic has been rerouted to our link to Frankfurt.</p>tag:status.combahton.net,2005:Incident/188536582023-11-11T14:35:53Z2023-11-11T14:35:53ZDarkfiber Maintenance - FFM1<p><small>Nov <var data-var='date'>11</var>, <var data-var='time'>14:35</var> UTC</small><br><strong>Completed</strong> - This is completed according to euNetworks. It appears the short blip on the current primary fiber pair earlier this morning was caused by a technical failure.</p><p><small>Nov <var data-var='date'>11</var>, <var data-var='time'>07:34</var> UTC</small><br><strong>Update</strong> - We had a downtime of approximately 10 minutes on one of the darkfibers that was under maintenance on 10th November, even though it had been clarified multiple times that there would be no downtime. We are currently on a call with the euNetworks NOC and driving to the location where the field team is carrying out the works.</p><p><small>Nov <var data-var='date'>11</var>, <var data-var='time'>07:00</var> UTC</small><br><strong>In progress</strong> - Scheduled maintenance is currently in progress.
We will provide updates as necessary.</p><p><small>Nov <var data-var='date'>11</var>, <var data-var='time'>04:22</var> UTC</small><br><strong>Update</strong> - Following yesterday's maintenance of the first darkfiber pair, which went smoothly and was finished by our supplier after just 45 minutes, we are confident the second pair will go just as smoothly as the first.<br /><br />Routing changes have just been pushed to prefer the pair not under maintenance.</p><p><small>Oct <var data-var='date'>23</var>, <var data-var='time'>16:07</var> UTC</small><br><strong>Update</strong> - There will be a separate maintenance on our first darkfiber connection on 10th November, starting at 08:00 am and taking at most 4 hours. Connectivity will be provided by the second darkfiber pair, which is scheduled for maintenance on 11th November as initially communicated. The maintenance is necessary because euNetworks is upgrading their diverse cables from 72 to 462 strand fibers. We are in close communication with our account manager and planning team to ensure zero impact for our customers. During the maintenance we will be onsite to ensure short reaction times in any event.</p><p><small>Oct <var data-var='date'>19</var>, <var data-var='time'>13:22</var> UTC</small><br><strong>Scheduled</strong> - On 11/11/2023 between 08:00 and 18:00, the darkfiber supplier will perform scheduled maintenance on a dark fiber link between Tornado Datacenter (FFM1) and Interxion (FFM3/FRA15). We would like to point out that no outage is expected during this time.
Transport traffic will continue to be handled smoothly via VXLAN, and all other services will be maintained via the second fiber connection.<br /><br />We thank you for your understanding and will be happy to answer any questions you may have.</p>tag:status.combahton.net,2005:Incident/189010852023-11-05T04:00:07Z2023-11-05T04:00:07ZMaintenance - core01.ffm1.de<p><small>Nov <var data-var='date'> 5</var>, <var data-var='time'>04:00</var> UTC</small><br><strong>Completed</strong> - The scheduled maintenance has been completed.</p><p><small>Nov <var data-var='date'> 5</var>, <var data-var='time'>00:00</var> UTC</small><br><strong>In progress</strong> - Scheduled maintenance is currently in progress. We will provide updates as necessary.</p><p><small>Oct <var data-var='date'>23</var>, <var data-var='time'>16:01</var> UTC</small><br><strong>Scheduled</strong> - We'll be upgrading core01.ffm1.de (Juniper QFX) to an Arista 100G chassis; this requires a maintenance involving no more than 30 minutes of downtime per affected customer within the given period. The maintenance only affects customers at our FFM1 (Tornado Datacenter, Langen, Germany) Point of Presence; these accounts will be separately informed by email one week in advance.</p>tag:status.combahton.net,2005:Incident/190044162023-11-03T00:54:59Z2023-11-03T00:54:59ZOutage - RETN (AS9002) Upstream<p><small>Nov <var data-var='date'> 3</var>, <var data-var='time'>00:54</var> UTC</small><br><strong>Resolved</strong> - RETN has officially communicated the end of the unannounced maintenance. Sessions are re-enabled; closing this now.</p><p><small>Nov <var data-var='date'> 2</var>, <var data-var='time'>21:56</var> UTC</small><br><strong>Monitoring</strong> - We're awaiting the official maintenance end notification in order to turn our BGP sessions back on.
Traffic flow isn't impacted, as other carriers have taken over the load.</p><p><small>Nov <var data-var='date'> 2</var>, <var data-var='time'>21:51</var> UTC</small><br><strong>Investigating</strong> - RETN is currently carrying out a maintenance on their network in Frankfurt, which they forgot to inform us about. The email statement reads as follows: <br /><br />"Dear Customer, Ticket FI-xxxx has been opened.<br />Failure Notification<br />Description:<br />Scheduled maintenance has started on our network, which you were not notified about due to human error. The time of completion of the work is 03:00 UTC+2. The expected interruption time is two hours. We apologise for any inconvenience caused.<br /><br />Start Time: 02.11.2023 21:16 GMT<br /><br />Location: Germany, Frankfurt am Main"<br /><br />From our perspective, this is unacceptable, as we experienced a brief period of packet loss. We have shut down our BGP sessions with RETN until the unannounced maintenance is over.</p>tag:status.combahton.net,2005:Incident/189379122023-10-27T00:00:40Z2023-10-27T00:00:40ZOutage - CDN77 Upstream<p><small>Oct <var data-var='date'>27</var>, <var data-var='time'>00:00</var> UTC</small><br><strong>Resolved</strong> - The issue was resolved at around 22:30 (CEST); we have re-enabled our BGP sessions.</p><p><small>Oct <var data-var='date'>26</var>, <var data-var='time'>19:50</var> UTC</small><br><strong>Update</strong> - Eight BGP sessions with CDN77 are down at the moment; we're disabling all of them to avoid any impact.</p><p><small>Oct <var data-var='date'>26</var>, <var data-var='time'>19:18</var> UTC</small><br><strong>Investigating</strong> - At the moment, four BGP sessions on one of two CDN77 routers in Frankfurt are down.
They are reporting an ongoing DDoS attack causing trouble, apparently targeting the control plane of their routers - attacks we also saw on our network earlier this year.<br /><br />Connectivity appears to be unimpacted according to traffic levels on those ports, so we don't see any need to react at the moment.</p>tag:status.combahton.net,2005:Incident/189253262023-10-25T19:50:56Z2023-10-25T19:50:56ZOutage - CDN77 Upstream<p><small>Oct <var data-var='date'>25</var>, <var data-var='time'>19:50</var> UTC</small><br><strong>Resolved</strong> - CDN77 has resolved the issue as of around 19:00 (CEST). We have therefore re-enabled our sessions and are closing this now.</p><p><small>Oct <var data-var='date'>25</var>, <var data-var='time'>16:09</var> UTC</small><br><strong>Investigating</strong> - We have been made aware of BGP session flaps occurring since ~17:50 (CEST) with CDN77 (AS60068). The fault has been confirmed by their network team; we have already shut down our BGP sessions with them to avoid any impact to our customers.</p>tag:status.combahton.net,2005:Incident/188780742023-10-21T15:57:38Z2023-10-21T15:57:38ZFFM2 POP Outage<p><small>Oct <var data-var='date'>21</var>, <var data-var='time'>15:57</var> UTC</small><br><strong>Resolved</strong> - We were finally able to resolve the issue. The link flaps were caused by the PFE (packet forwarding engine), which was overloaded by a customer at Interwerk causing excessive resource usage at the Ethernet level through abusive behavior. We have deployed protective measures on the relevant equipment.</p><p><small>Oct <var data-var='date'>21</var>, <var data-var='time'>13:06</var> UTC</small><br><strong>Monitoring</strong> - The datacenter operator replied that they cannot identify any issue on their side. Our best guess is therefore a physical failure at Interxion; potentially one of the campus connections between different Meet-Me-Rooms went down on its own.
We are still investigating and clarifying the case.</p><p><small>Oct <var data-var='date'>21</var>, <var data-var='time'>12:57</var> UTC</small><br><strong>Investigating</strong> - We have received reports from our monitoring about an outage at FFM2 (Interwerk). Both interconnection links to Interxion went down at around 02:44 pm (CEST) and came back up after around 4 minutes of downtime. The network at this POP is stable again; the datacenter operator has been contacted to clarify what happened.</p>tag:status.combahton.net,2005:Incident/184788092023-09-27T20:00:08Z2023-09-27T20:00:08ZMaintenance - POP Migration FFM5<p><small>Sep <var data-var='date'>27</var>, <var data-var='time'>20:00</var> UTC</small><br><strong>Completed</strong> - The scheduled maintenance has been completed.</p><p><small>Sep <var data-var='date'>27</var>, <var data-var='time'>14:01</var> UTC</small><br><strong>In progress</strong> - Scheduled maintenance is currently in progress. We will provide updates as necessary.</p><p><small>Sep <var data-var='date'>12</var>, <var data-var='time'>10:52</var> UTC</small><br><strong>Scheduled</strong> - We'll be carrying out a migration of our point of presence at Equinix FR5 (FFM5) to Equinix FR7 (new FFM5), mainly affecting Layer2 Transport services. Customers with Layer2 Transport at Equinix FR5 (FFM5) will be affected by a total downtime of up to 6 hours.<br /><br />Related crossconnects at Equinix FR5 will be moved over to FR7, while customers with Layer2 Transport will be migrated onto our brand-new EVPN fabric, allowing for better redundancy on the physical layer (darkfiber) and flexibility of up to multiple terabits.
Once the migration is finished, FFM5 will be redundantly interconnected with our points of presence at FFM1 (Tornado Datacenter) as well as FFM3 (Interxion Frankfurt FRA16) as an n*100Gbit ring throughout the Frankfurt am Main metro region.</p>tag:status.combahton.net,2005:Incident/182435032023-08-24T19:34:33Z2023-08-24T19:34:33ZInterconnection Outage - FFM1<p><small>Aug <var data-var='date'>24</var>, <var data-var='time'>19:34</var> UTC</small><br><strong>Resolved</strong> - This incident has been resolved.</p><p><small>Aug <var data-var='date'>24</var>, <var data-var='time'>13:19</var> UTC</small><br><strong>Update</strong> - We are continuing to monitor for any further issues.</p><p><small>Aug <var data-var='date'>24</var>, <var data-var='time'>13:19</var> UTC</small><br><strong>Monitoring</strong> - Affected services have been completely restored. A few switches in HP Bladecenters didn't boot, causing an outage of our cloud environment. These will be replaced over the next few weeks with passthrough modules in combination with Arista equipment.</p><p><small>Aug <var data-var='date'>24</var>, <var data-var='time'>10:31</var> UTC</small><br><strong>Update</strong> - We are aware that the cause of the recent issue was a blackout affecting almost the entire southern part of Frankfurt/Offenbach due to an accident on the high-voltage network of Netzdienste Rhein Main. In our case, it caused an issue on a single power feed supplying one phase of 20 racks, which was in transfer bypass mode for temporary maintenance reasons, leading to tripped rack fuses upon load transfer due to the starting current of some servers.
The cause and potential mitigations will be further clarified once we have restored the remaining services; thank you for your patience.</p><p><small>Aug <var data-var='date'>24</var>, <var data-var='time'>09:46</var> UTC</small><br><strong>Update</strong> - Services are back online; we're checking all relevant systems.</p><p><small>Aug <var data-var='date'>24</var>, <var data-var='time'>09:07</var> UTC</small><br><strong>Identified</strong> - Our technician is on their way and we're awaiting restoration.</p><p><small>Aug <var data-var='date'>24</var>, <var data-var='time'>08:42</var> UTC</small><br><strong>Investigating</strong> - We're currently facing a network outage at our FFM1 location. Our team is actively investigating the root cause to restore normal operations as quickly as possible. We will provide regular updates as we gather more information and make progress towards a solution.<br /><br />We understand the importance of our services to your operations and apologize for any inconvenience caused. We greatly appreciate your patience and will keep you informed as we work towards a resolution.</p>tag:status.combahton.net,2005:Incident/181946592023-08-19T20:37:38Z2023-08-19T20:37:38ZDelayed processing of antiddos rerouting calls<p><small>Aug <var data-var='date'>19</var>, <var data-var='time'>20:37</var> UTC</small><br><strong>Resolved</strong> - The issue has been resolved by moving antiddos routing to an event-based architecture, which we now use as a standard across our application landscape.</p><p><small>Aug <var data-var='date'>19</var>, <var data-var='time'>13:39</var> UTC</small><br><strong>Investigating</strong> - We are aware of delayed processing of antiddos rerouting API calls, which can cause IP addresses under permanent mitigation to get stuck in their l4_permanent enabled state. This effect is caused by scaling issues with the underlying scheduling service.
Our software architects are working on a resolution by moving this component to an event-based design.</p>tag:status.combahton.net,2005:Incident/181052992023-08-14T06:00:07Z2023-08-14T06:00:07ZScheduled Network Maintenance - Eygelshoven, NL (EGH1)<p><small>Aug <var data-var='date'>14</var>, <var data-var='time'>06:00</var> UTC</small><br><strong>Completed</strong> - The scheduled maintenance has been completed.</p><p><small>Aug <var data-var='date'>14</var>, <var data-var='time'>04:01</var> UTC</small><br><strong>In progress</strong> - Scheduled maintenance is currently in progress. We will provide updates as necessary.</p><p><small>Aug <var data-var='date'>10</var>, <var data-var='time'>12:57</var> UTC</small><br><strong>Scheduled</strong> - We would like to inform you of an upcoming maintenance that will impact our services. The datacenter operator of our Eygelshoven POP, SkyLink Data Center BV, will be performing essential network upgrades to enhance performance and accommodate increased demand.<br /><br />During the maintenance window on Monday, August 14th, between 06:00 AM and 08:00 AM CEST, there will be an expected service interruption of approximately 30 minutes affecting our services. This means that there may be a temporary loss of connectivity during this time frame. This interruption is necessary as SkyLink Data Center will upgrade the firmware of their core network, implement redundant routers, and make significant routing changes.<br /><br />After the maintenance, we will closely monitor the situation to ensure smooth service restoration. If connectivity issues persist beyond the maintenance window, please contact us via our customer portal at https://my.aurologic.com<br /><br />Please note that customers directly affected by this maintenance will receive additional information via email.
If you do not receive an email regarding this maintenance, your services should not be significantly impacted.</p>tag:status.combahton.net,2005:Incident/172299002023-05-27T04:00:09Z2023-05-27T04:00:09ZMigration - Interxion Frankfurt -> Tornado Datacenter FFM1<p><small>May <var data-var='date'>27</var>, <var data-var='time'>04:00</var> UTC</small><br><strong>Completed</strong> - The scheduled maintenance has been completed.</p><p><small>May <var data-var='date'>26</var>, <var data-var='time'>22:00</var> UTC</small><br><strong>In progress</strong> - Scheduled maintenance is currently in progress. We will provide updates as necessary.</p><p><small>May <var data-var='date'>12</var>, <var data-var='time'>15:58</var> UTC</small><br><strong>Scheduled</strong> - We'll be carrying out a migration of part of the dedicated servers hosted at Interxion Frankfurt to our new datacenter, Tornado Datacenter FFM1. The migration will start at 22:00 UTC (00:00 CEST, 26.05.2023) and will involve a downtime of about 2-6 hours for each server we move.<br /><br />Nevertheless, we will try to keep the downtime as low as possible. The new datacenter is located about 20km away from Interxion Frankfurt, which means most of the time is spent unracking, racking, and cabling servers. IP addresses won't change; connectivity will be transparently provided through our darkfibers between both datacenters.<br /><br />Customers affected by this migration will be informed one week in advance.</p>tag:status.combahton.net,2005:Incident/164000342023-04-06T05:00:11Z2023-04-06T05:00:11ZMaintenance Interconnection HEL1 - FFM3<p><small>Apr <var data-var='date'> 6</var>, <var data-var='time'>05:00</var> UTC</small><br><strong>Completed</strong> - The scheduled maintenance has been completed.</p><p><small>Apr <var data-var='date'> 5</var>, <var data-var='time'>22:01</var> UTC</small><br><strong>In progress</strong> - Scheduled maintenance is currently in progress.
We will provide updates as necessary.</p><p><small>Mar <var data-var='date'> 9</var>, <var data-var='time'>10:56</var> UTC</small><br><strong>Scheduled</strong> - In the above-mentioned period, maintenance will be performed on the backbone connection between Frankfurt and Helsinki. This may lead to brief latency issues. We apologize for any inconvenience. Thank you for your understanding.</p>