The depths — Chaotic Bliss


Here at the bottom there’s a strange sort of air; the walls are all blurred and the corners aren’t there, here in the depths a man lay unwoken, we know this man’s name though it cannot be spoken, as I approached him he made cryptic motions, with hands that looked like they’ve never seen lotion, […]

Second post from one of our other sites

Chaotic bliss – an episodic poetry and philosophy site

via The depths — Chaotic Bliss


Why the fuck

Why the fuck do you listen to anyone older than you? They never seem to be right. Always steering you in the wrong direction. I need something. Something that I can’t wait for, so do not tell me to wait. I have shit to do and places to be. No time for silly games and petty bullshit. I can’t get my shit together till your shit’s together. But because I’m younger I’m always wrong. Fuck you.

Bank Of America Gets Patent For System To Secure Crypto Storage — UseTheBitcoin


The second largest bank in America has won a patent that will secure cryptocurrency storage through a ‘tamper responsive’ remote storage of private keys. The patent adds to the growing list of… […]

via Bank Of America Gets Patent For System To Secure Crypto Storage — UseTheBitcoin

AMD Announces 7nm EPYC CPUs and Radeon Instinct GPUs; Intel 10nm Still Nowhere in Sight

AMD’s 7nm Vega 20 GPU die. Image credit: Anandtech

On November 6th, AMD held an event dubbed “Next Horizon,” during which they formally announced the next generation of EPYC “Rome” high-performance CPUs and Radeon Instinct machine learning/AI GPUs for the data center. These chips are manufactured on TSMC’s bleeding edge 7 nanometer fabrication process, said to deliver 2x the density and a 50% reduction in power consumption versus the currently used 14nm LPP node from GlobalFoundries. The day prior, Intel made somewhat of an attempt to upstage AMD, announcing its Cascade Lake-AP server CPUs, still manufactured on 14nm. Based on the specifications of the upcoming chips, AMD appears poised to take significant share from Intel in the data center market next year.

First, let’s dive into the known specifications of the EPYC chips:

AMD EPYC “Rome”
Node/uArch: 7nm Zen 2
Cores/Threads: 64/128
Clock Speed (base/boost): 1.8GHz (ES) / ?
L2 Cache: 32MB
L3 Cache: 128MB
L4 Cache: Maybe?
Memory: Octa-Channel DDR4 (3200?), up to 4TB per socket
I/O: 128x PCI-E 4.0
Socket: Socket SP3 (LGA 4094) (<2P)
Price: $$$$

Intel Xeon “Cascade Lake-AP”
Node/uArch: 14nm++ Cascade Lake
Cores/Threads: 48/96
Clock Speed (base/boost): ?
L2 Cache: 24MB
L3 Cache: 66MB?
Memory: Dodeca-Channel DDR4-2667, up to 3TB per socket
I/O: 96x PCI-E 3.0
Socket: BGA 5908 (<2P)
Price: $$$$$$

AMD EPYC 7601 (“Naples”)
Node/uArch: 14nm Zen
Cores/Threads: 32/64
Clock Speed (base/boost): 2.2GHz / 3.2GHz
L2 Cache: 16MB
L3 Cache: 64MB
Memory: Octa-Channel DDR4-2667, up to 2TB per socket
I/O: 128x PCI-E 3.0
Socket: Socket SP3 (LGA 4094) (<2P)
Price: $4200

Intel Xeon Platinum 8180M (“Skylake-SP”)
Node/uArch: 14nm+ Skylake-SP
Cores/Threads: 28/56
Clock Speed (base/boost): 2.5GHz / 3.8GHz
L2 Cache: 28MB
L3 Cache: 38.5MB
Memory: Hexa-Channel DDR4-2667, up to 1.5TB per socket
I/O: 48x PCI-E 3.0
Socket: LGA 3647 (<8P)
Price: $13000

First, we immediately take note of the sheer size of this monstrosity: 64 cores, 128 threads, and 160MB of combined L2 + L3 cache. AMD achieved this in part by moving to an even more modular architecture than the one used in the original EPYC, coupled with a new version of Infinity Fabric to reduce latency and increase bandwidth.
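As a sanity check, those cache totals are consistent with Zen 1-style per-core and per-die sizes (512KB of L2 per core and 16MB of L3 per 8-core die — our assumption here, since AMD hasn’t confirmed the per-die breakdown):

```python
# Back-of-the-envelope cache totals for "Rome", assuming Zen 1-style
# 512 KB of L2 per core and 16 MB of L3 per 8-core die. The per-unit
# figures are assumptions, not confirmed by AMD.
cores = 64
dies = 8
l2_per_core_mb = 0.5    # 512 KB, as on the original Zen
l3_per_die_mb = 16      # assumed

l2_total = cores * l2_per_core_mb   # 32 MB
l3_total = dies * l3_per_die_mb     # 128 MB
print(l2_total, l3_total, l2_total + l3_total)  # 32.0 128.0 160.0
```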

Image credit: Tom’s Hardware

The new EPYC incorporates 8 tiny CPU dies, each containing a complex of 8 cores, tied together through a massive I/O die which is still manufactured on the 14nm node. Things like I/O controllers don’t shrink down to smaller nodes very well, and the benefits of doing so are negligible. This allows AMD to significantly improve yields and production costs, as well as greatly mitigate the issues caused by non-uniform memory access: high latencies between dies and multiple hops for data through Infinity Fabric.

The I/O die contains the PCI-E controller, memory controller, and likely an L4 cache (although this remains unconfirmed). This eliminates NUMA and non-uniform memory latency, ensuring that only one hop to the I/O die is ever necessary and allowing the chip to behave like a true single-socket part. The L4 cache, if implemented, would be fully inclusive of the L3 (which is already inclusive of the L2), meaning that any data that needs to be pulled from another die’s cache would already be present in the I/O die, improving dramatically on “Naples”’ wildly varying cache latency.

“Rome” is also the first x86 CPU to implement the PCI-E 4.0 specification, doubling bandwidth for peripherals like graphics cards to 64GB/s bidirectional. It also boasts new Infinity Fabric Links, offering 200GB/s of bidirectional bandwidth to compatible Radeon Pro/Instinct GPUs as well as between CPUs in a dual-socket configuration. This puts it miles ahead of Intel’s current Xeon offerings, which expose only 48 PCI-E 3.0 lanes per socket. However, unlike EPYC, where dual-socket configurations use half of each CPU’s PCI-E lanes for inter-socket communication (and thus don’t increase the total lane count), Intel’s PCI-E lane count is unaffected in multi-socket configurations, so a system with two Xeons supports up to 96 PCI-E 3.0 lanes. This still falls far short of EPYC, with less than half the total I/O bandwidth.
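A rough back-of-the-envelope on those bandwidth claims, using the commonly cited approximations of ~1GB/s per lane per direction for PCI-E 3.0 and ~2GB/s for PCI-E 4.0 (rounded; encoding overhead ignored):

```python
# Approximate per-direction bandwidth per lane, in GB/s (rounded from
# ~0.985 GB/s for Gen3 and ~1.97 GB/s for Gen4).
GEN3, GEN4 = 1.0, 2.0

x16_gen4_bidir = 16 * GEN4 * 2     # one x16 PCI-E 4.0 slot, both directions
epyc_gbs = 128 * GEN4              # "Rome" aggregate, one direction
dual_xeon_gbs = 96 * GEN3          # two Xeons, aggregate, one direction
print(x16_gen4_bidir, dual_xeon_gbs / epyc_gbs)  # 64.0 0.375
```

That 0.375 ratio is where the “less than half the total I/O bandwidth” figure comes from.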

AMD didn’t reveal clock speeds or exact performance uplift, but a footnote in their press release suggested a 29% IPC improvement over “Naples”. Even if this is a best-case scenario and typical workloads only see half the improvement, it is nonetheless very impressive. They discussed numerous architectural improvements, such as an improved front-end and branch predictor, lower latencies, and a widening of the FPU to 256 bits. This means they’re tackling the key weaknesses of their previous lineup versus Intel’s CPUs, chiefly workloads that are latency-sensitive or utilize 256-bit AVX.

At the event, they demonstrated one 64-core “Rome” CPU being benchmarked in C-Ray against two 28-core Xeon Platinum 8180M CPUs (the top of the line from Intel, costing $13000 each) in a dual-socket config. The EPYC machine finished the benchmark 7% quicker than the Xeons. Furthermore, AMD hinted that power consumption would stay the same with “Rome” (180W TDP), whereas the two Xeons have a combined TDP of 410W and also require a chipset which consumes about 20W. If this is even remotely indicative of typical performance, this chip will put Intel in the toughest competitive position it’s been in since 2005.
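Taking the demo’s numbers at face value, a crude performance-per-watt estimate (the times below are normalized from the quoted 7% difference, not independently measured):

```python
# One "Rome" at 180 W vs. two Xeon 8180M (2 x 205 W) plus ~20 W of chipset.
epyc_watts = 180
xeon_watts = 2 * 205 + 20            # 430 W combined

epyc_time, xeon_time = 1.0, 1.07     # EPYC finished ~7% quicker
epyc_perf_per_watt = (1 / epyc_time) / epyc_watts
xeon_perf_per_watt = (1 / xeon_time) / xeon_watts
print(round(epyc_perf_per_watt / xeon_perf_per_watt, 2))  # 2.56
```

In other words, roughly 2.5x the performance per watt — in this one benchmark, and only if the demo figures hold up.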

Intel’s response, which will probably be released at least a quarter or two later than “Rome” if their recent antics are anything to go by, is a 48-core Cascade Lake-AP CPU. While last year Intel famously berated AMD for using 4 “glued together” dies in “Naples,” this CPU will utilize two of Intel’s 28-core dies on an MCM package, migrated to the more refined 14nm++ node. Note that 4 cores have been disabled on each die, likely to rein in what would otherwise be excessive heat and power consumption. As each die supports 6 memory channels, the CPU will support 12 channels of DDR4 memory. No other concrete information was announced by Intel, leading this author to think the product is nowhere near launch, much like their 28-core desktop CPU announced 5 months ago which is still nowhere in sight.

Regardless of when it comes out, Intel’s CPU will likely have a hard time competing. It’s still manufactured on 14nm, using two gigantic 698mm^2 dies (“Rome” uses dies in the 70mm^2 range), and all expectations are set on a TDP of 300W or higher compared to 180W for EPYC. Moreover, based on available data, it looks like “Rome” will have equal, if not measurably better, performance per clock and per core compared to Intel’s aging “-Lake” architecture. With nearly double the power consumption, 3/4 the cores, a far higher price tag, and less than half the I/O bandwidth, it’s hard to see the appeal of this part compared to EPYC. The main thing Intel has going for it right now is its reputation — it’s thoroughly established and entrenched in the data center, with a reliable track record and countless existing contracts, whereas AMD was absent from this market for the past few years. “Rome” might just be enough to make many large customers change platforms, though.

AMD made no announcements regarding desktop parts, but rest assured that Zen 2-based chips will be coming to Socket AM4 in 2019. Beyond that is the realm of rumor and speculation.

The Radeon Instinct chips are a lot less exciting, so I’m not going to cover them in as much depth. There are two models: the MI60, featuring 64 compute units and a TDP of 300W; and the MI50, featuring 60 compute units and a TDP of 150W. The chip used is a die shrink of the current 14nm “Vega 10” silicon to 7nm, dubbed “Vega 20”, with enhancements such as dedicated INT4/INT8 hardware delivering 59/118 TOPS respectively and a bump to a 1:2 FP64:FP32 ratio from Vega 10’s 1:16. Otherwise, the architecture is the same. The core config of 4096 SP, 256 TMU, and 64 ROP is retained, while the number of HBM stacks is doubled (running at 2000MHz effective) for a total of 16GB/32GB (MI50/MI60) and 1TB/s of bandwidth.

These cards are marketed for AI and machine learning, but they are arguably even better suited to scientific and HPC workloads thanks to the FP64 units. In this area they will compete with NVIDIA’s Tesla V100 PCI-E card, which offers similar theoretical performance on the 12nm node. For machine learning, NVIDIA’s Tesla T4 (also 12nm) should offer superior performance at a fraction of the power draw (75W vs 150W/300W for Radeon Instinct).

These cards are not being marketed to gamers, and for gaming workloads we would not expect them to exceed 1080 Ti performance. We are eagerly awaiting AMD’s “next gen” graphics architecture, coming ~2020-2021, as GCN (which is approaching its 8th birthday) is simply not exciting anymore and does not compete strongly. However, AMD will likely be competing on price.
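For reference, that 1:2 ratio can be turned into theoretical throughput figures; the peak clock below is our assumption (around 1.8GHz), chosen only to land in the ballpark of AMD’s quoted numbers:

```python
# Theoretical peak throughput for "Vega 20" (MI60 configuration).
sp = 4096           # stream processors, same core config as "Vega 10"
clock_ghz = 1.8     # ASSUMED peak clock, not officially stated here
fp32_tflops = sp * 2 * clock_ghz / 1000  # 2 FLOPs per SP per cycle (FMA)
fp64_tflops = fp32_tflops / 2            # the new 1:2 FP64:FP32 ratio
print(round(fp32_tflops, 1), round(fp64_tflops, 1))  # 14.7 7.4
```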

But back to the exciting stuff, the CPUs. Is it over? Is Intel finished? How will 7nm Ryzen materialize? Let us know what you think in the comments.



1k view milestone!!!

Shoutout to everyone who took time out of their arguably important lives to check out our content, and to all the countries we get traffic from.

We love y’all for fuckin with us

Stay tuned comrades

-TheGreatestNever team

Robinhood vs. Stash vs. Acorn

Join Robinhood using this link and we’ll both get a stock like Apple, Ford, or Sprint for free:

Due to the growing field of mobile investing, it is now easier than ever to start building a portfolio of traded assets. No longer does one need the services of a stock broker or a large sum of money to make an impact on the financial health of their future; all you need is $5 and a bank card!

Though there are many brokerages that extend their services to mobile investors, there are three main ones I’m going to focus on, as I have spent a fairly extensive amount of time with each. While I currently only use the services of Robinhood, I will say that I profited, modestly but profited nonetheless, by the time I drew my money out.

Well, here they are, the pros and cons.

1. Robinhood:


+ fastest trading capabilities

+ widest stock selection

+ allows the use of leverage

+ allows users to trade cryptocurrency (depending on area)

+ allows complex order types like short sales, limit sells, and limit buys

+ phenomenal market data and articles


– Hardest to use

– requires most involvement

– very tempting to over trade

Overall: I would highly recommend Robinhood to anyone who has a moderate understanding of the stock market, wishes to make a dedicated hobby of it, wants high control over their portfolio, or has highly specific companies they wish to invest in.

People who are new to investing, or who don’t have as much interest in going in depth, may find the others much more desirable.

2. Stash:

+ ETF focused

+ Lower Risk factor

+ Very informational for beginners

+ Simple and easy to learn

+ Beautiful UI

+ Fractional shares

– Smaller stock selection

– delayed trading times

– confusing category system

Overall: Stash sits in the middle in terms of user difficulty, available features, and portfolio control.

3. Acorn:

+ Extreme simplicity

+ Automatic portfolio builds

+ Requires minimal knowledge

+ Offers Credit line

+ Allows accounts made for children

+ Reward per purchase system for linking credit/bank cards

+ Soothing UI

– minimal portfolio control (presets only)

– no trading

– teaches very little about investing

Overall: Great for absolute beginners or for those who just wanna invest without caring how it works.

In my experience, Acorn also pairs great with Stash and/or Robinhood: since Acorn is a low-risk, long-term service and the others are more risk-, choice-, and trade-oriented, using more than one in unison may be a great way to diversify one’s overall holdings.
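To make the “use them in unison” idea concrete, here is a toy split of a monthly deposit; the percentages are purely illustrative, not a recommendation:

```python
# Hypothetical split of a monthly deposit across the three apps by risk
# appetite: most into the low-risk, long-term service, least into active
# trading. Illustrative only -- not financial advice.
monthly_deposit = 100.0
allocation = {"Acorn": 0.50, "Stash": 0.30, "Robinhood": 0.20}

for app, weight in allocation.items():
    print(f"{app}: ${monthly_deposit * weight:.2f}")
```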

Let us know in the comments which one(s) you use or have used and why!


The Philosophy of everything part 1.

Why is everything so different from anything? If one could be anything, then couldn’t they just opt to be everything? Is anything a thing? I suppose it could be closer to an idea, but even then, surely an idea is a thing.

What about nothing? If one could choose to be anything, surely they could choose to be nothing, since anything consists of any possible thing. But this would imply nothing is a thing, which is quite contradictory to its blatant meaning of not being a thing.

If nothing is not a thing, then how could it be accessible within the parameters of anything? I suppose it could be an anti-thing.

If an antihero is a hero with non-heroic qualities, could an anti-thing be a thing which lacks the qualities of a thing?

Well, by this technicality, nothing would be a placeholder for the absence of a thing, and a placeholder is a thing, so surely it must be an anti-thing, because while it lacks the qualities of a thing it somehow marginalizes itself into the category.

Alright enough of my nonsense, peace out