sufu1

Supercrash

116 posts in this topic

totally epic is probably the best screen name ever (tied with DDML).

i hate you though, because now (for a while) i cannot use the expression without people thinking i am trying to send some subliminal message about you.

That was random


i hate you though, because now (for a while) i cannot use the expression without people thinking i am trying to send some subliminal message about you.

Yeah... last time I said "I just mouth fucked totally epic", everyone thought I mouth fucked totally epic. Goddammit Doug!


i like your nickname too.... no homo

p.s. you got epic honeycombzzzz mang! lololoololololollo button mash.


I'd like to know what happened in the above accident. I'm guessing Winnie the Pooh knocked over the motorcycle, which failed to yield when entering the intersection. Tigger, who was behind Pooh, rear-ended him, followed by the clueless postal worker who crashed into the pile. Poor dude had no seatbelt and flew straight out :(

I'd like to know what happened in the above accident. [...] Tigger, who was behind Pooh, rear-ended him, followed by the clueless postal worker who crashed into the pile. [...]

somebody get tigger off pooh, fucker rear-ends everyone.


something fishy going on with the network provider..... not sufu servers...

this was the reason for the crash last week, and i suspect it's similar to what happened a few hours ago. if anyone can decipher the following excuse given to us then they must be very smart indeed.

++++++++++++++++++++++++++++++++++++++++++++++++++++++

Broadcast storm of undetermined origin caused link flapping which in turn caused HSRP and spanning tree failures. The broadcast storm apparently began in the C2 data center, disrupting traffic on key corporate VLANs as well as hosted servers. The C2 core router's CPU became overloaded and inter-data center links were non-responsive, causing STP recalculations and HSRP failures. Key corporate infrastructure became inaccessible as multiple routers attempted to take over (or relinquish) gateway IPs as spanning tree calculated switching paths appeared and disappeared.

The C2 router shares switching infrastructure with the C3 core and the initial state of the data center interconnections had most traffic passing through the C5 data center. The broadcast storm cascaded through both the primary and backup C5 distribution networks, leaving access switches with no egress. The broadcast storm propagated through the shared switching infrastructure of the C3 data center facility. Both primary and redundant customer colocation access routers were affected and the storm propagated to the customer access switches. As a result, many customer access devices (in the colocation cabinets) were left in a non-functioning state and required a reboot to restore services.

Cisco engineers are on site to determine the root cause of the issue. In the interim we have taken the steps to deploy additional equipment and to remove certain HSRP and redundant switch paths to reduce the severity of link flapping until 100% resolution is proven.
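
for anyone trying to decode the "link flapping" part: it just means switch ports going up and down over and over while the storm hammered everything. here's a rough python sketch of how you might spot that from a switch syslog. purely illustrative: the log file name, the flap threshold and the exact log format are my assumptions, not anything the provider actually runs.

import re
from collections import Counter

# typical Cisco-style syslog line:
# "%LINEPROTO-5-UPDOWN: Line protocol on Interface GigabitEthernet1/0/3, changed state to down"
UPDOWN = re.compile(
    r"%LINEPROTO-5-UPDOWN: Line protocol on Interface (\S+), changed state to (up|down)"
)

def flapping_ports(log_path, threshold=10):
    """Count up/down transitions per port and return the ones above the threshold."""
    transitions = Counter()
    with open(log_path) as log:
        for line in log:
            match = UPDOWN.search(line)
            if match:
                port, _state = match.groups()
                transitions[port] += 1
    return {port: n for port, n in transitions.items() if n > threshold}

if __name__ == "__main__":
    # "switch-syslog.txt" is a made-up file name for the example
    for port, count in sorted(flapping_ports("switch-syslog.txt").items()):
        print(f"{port}: {count} up/down transitions (possible flapping)")

once a port crosses a threshold like that, you go look at what's flooding it (storm counters, STP topology change logs), which is more or less what the notice above describes at data-center scale.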


isn't that just solar winds and magnetic scribby scrambly?


Broadcast storm of undetermined origin caused link flapping which in turn caused HSRP and spanning tree failures. [...]

someone hacked the gibson

Broadcast storm of undetermined origin caused link fapping

this explains it all

Broadcast storm of undetermined origin caused link flapping which in turn caused HSRP and spanning tree failures. [...]

[image: startrekshock.gif]


Boooooooom


weeelll... you see we have a big, big, big server in a big, big, big server facility and they are having big, big, big problems meaning we have big, big, big problems.

-> and if anyone knows what this means (# df), and what nasty log files await when you run that command, then you get the picture
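
(for the non-unix people: df just lists how full each mounted filesystem is, and the punchline is usually /var or /var/log sitting at 100%. a rough python stand-in, with the mount points picked purely as examples, not a map of the sufu box:)

import shutil

# mount points here are just examples; the real server will have its own layout
MOUNTS = ["/", "/var", "/var/log"]

for mount in MOUNTS:
    try:
        usage = shutil.disk_usage(mount)
    except FileNotFoundError:
        continue  # that mount point doesn't exist on this machine
    pct_used = usage.used / usage.total * 100
    note = "  <- the nasty log files live here" if pct_used > 90 else ""
    print(f"{mount}: {pct_used:.0f}% used, {usage.free // 2**30} GiB free{note}")

if one of those comes back at 99-100%, you've found the big, big, big problem.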

