LocalBiz Lift
Results-driven Local Marketing Solutions for Miami Businesses


Posted by KameronJenkins

When Google says they prefer comprehensive, complete content, what does that really mean? In this week's episode of Whiteboard Friday, Kameron Jenkins explores actionable ways to translate the demands of the search engines into valuable, quality content that should help you rank.


Video Transcription

Hey, guys. Welcome to this week's edition of Whiteboard Friday. My name is Kameron Jenkins, and I work here at Moz.

Today we're going to be talking about content quality and comprehensiveness, what that means, and why it can sometimes be confusing. I want to use an example scenario of a conversation that tends to go on between SEOs and Google. So here we go.

An SEO usually says something like, "Okay, Google, you say you want to rank high-quality content. But what does that really mean? What is high quality, because we need more specifics than that."

Then Google goes, "Okay, high quality is something that's comprehensive and complete. Yeah, it's really comprehensive." SEOs go, "Well, wait. What does that even mean?"

That's kind of what this was born out of. Just kind of an explanation of what is comprehensive, what does Google mean when they say that, and how that differs depending on the query.

Here we have an example page, and I'll kind of walk you through it. It's just going to serve to demonstrate why when Google says "comprehensive," that can mean something different for an e-commerce page than it would for a history of soccer page. It's really going to differ depending on the query, because people want all sorts of different kinds of things. Their intent is going to be different depending on what they're searching in Google. So the criteria is going to be different for comprehensiveness. So hopefully, by way of example, we'll be able to kind of walk you through what comprehensiveness looks like for this one particular query. So let's just dive in.

1. Intent

All right. So first I'm going to talk about intent. I have here a Complete Guide to Buying a House. This is the query I used as an example. Before we dive in, even before we look into keyword research tools or anything like that, I think it's really important to just like let the query sit with you for a little bit. So "guide to buying a house," okay, I'm going to think about that and think about what the searcher probably wanted based on the query.

So first of all, I noticed "guide." The word "guide" to me makes it sound like someone wants something very complete, very thorough. They don't just want quick tips. They don't want a quick bullet list. This can be longer, because someone is searching for a comprehensive guide.

"To buying a house," that's a process. That's not like an add-to-cart like Amazon. It's a step-by-step. There are multiple phases to that type of process. It's really important to realize here that they're probably looking for something a little lengthier and something that is maybe a step-by-step process.

And too, you just look at the query, "guide to buying a house," people are probably searching that if they've never bought a house before. So if they've never bought a house before, it's just good to remember that your audience is in a phase where they have no idea what they're doing. It's important to understand your audience and understand that this is something that they're going to need very, very comprehensive, start-to-finish information on it.

2. Implications

Two, implications. This is again also before we get into any keyword research tools. By implications, I mean what is going to be the effect on someone after reading this? So the implications here, a guide to buying a house, that is a big financial decision. That's a big financial purchase. It's going to affect people's finances and happiness and well-being, and Google actually has a name for that. In their Quality Rater Guidelines, they call that YMYL. So that stands for "your money, your life."

Those types of pages are held to a really high standard, and rightfully so. If someone reads this, they're going to get advice about how to spend their money. It's important for us, as SEOs and writers crafting these types of pages, to understand that these are going to be held to a really high standard. I think what that could look like on the page is, because they're making a big purchase like this, it might be a good sign of trustworthiness to maybe have some expert quotes in here. Maybe you kind of sprinkle those throughout your page. Maybe you actually have it written by an expert author instead of just Joe Schmoe blogger. Those are just some ideas for making a page really trustworthy, and I think that's a key to comprehensiveness.

3. Subtopics

Number three here we have subtopics. There are two ways that I'll walk you through finding subtopics to fit within your umbrella topic. I'm going to use Mo…

Want to run your own home shopping network? Facebook is now testing a Live video feature for merchants that lets them demo and describe their items for viewers. Customers can screenshot something they want to buy and use Messenger to send it to the seller, who can then request payment right through the chat app.

Facebook confirms the new shopping feature is currently in testing with a limited set of Pages in Thailand, which has been a testbed for shopping features. The option was first spotted by social media and reputation manager Jeff Higgins, and re-shared by Matt Navarra and Social Media Today. But now Facebook is confirming the test’s existence and providing additional details.

The company tells me it had heard feedback from the community in Thailand that Live video helped sellers demonstrate how items could be used or worn, and provided richer understanding than just using photos. Users also told Facebook that Live’s interactivity let customers instantly ask questions and get answers about product specifications and details. Facebook has looked to Thailand to test new commerce experiences like home rentals in Marketplace, as the country’s citizens were quick to prove how Facebook Groups could be used for peer-to-peer shopping. “Thailand is one of our most active Marketplace communities” says Mayank Yadav, Facebook Product Manager for Marketplace.

Now it's running the Live shopping test, which allows Pages to notify fans that they're broadcasting to "showcase products and connect with your customers." Merchants can take reservations and request payments through Messenger. Facebook tells me it doesn't currently have plans to add new partners or expand the feature. But some sellers without access are being invited to join a waitlist for the feature. It also says it's working closely with its test partners to gather feedback and iterate on the live video shopping experience, which would seem to indicate it's interested in opening the feature more widely if it performs well.

Facebook doesn't take a cut of payments through Messenger, but the feature could still help earn the company money at a time when it's seeking revenue streams beyond News Feed ads as it runs out of space there, Stories take over as the top media form, and user growth plateaus. Hooking people on video viewing helps Facebook show lucrative video ads. The more that Facebook can train users to buy and sell things on its app, the better the conversion rates will be for businesses, and the more they'll be willing to spend on ads. Facebook could also convince sellers who broadcast Live to buy its new Marketplace ad units to promote their wares. And Facebook is happy to snatch any use case from the rest of the internet, whether it's long-form video viewing or job applications or shopping, to boost time on site and subsequent ad views.

Increasingly, Facebook is setting its sights on Craigslist, Etsy, and eBay. Those commerce platforms have failed to keep up with new technologies like video and lack the trust generated by Facebook’s real name policy and social graph. A few years ago, selling something online meant typing up a generic description and maybe uploading a photo. Soon it could mean starring in your own infomercial.

[Postscript: And a Facebook home shopping network could work perfectly on its new countertop smart display Portal.]

Posted by MiriamEllis

This article was written jointly in partnership with Kameron Jenkins. You can enjoy her previous articles here.

When you’ve accomplished step one in your local search marketing, how do you take step two?

You already know that any local business you market has to have the table stakes of accurate structured citations on major platforms like Facebook, Yelp, Infogroup, Acxiom, and YP.

But what can local SEO practitioners do once they’ve got these formal listings created and a system in place for managing them? Our customers often come to us once they’ve gotten well underway with Moz Local and ask, “What’s next? What can I do to move the needle?” This blog post will give you the actionable strategy and a complete step-by-step tutorial to answer this important question.

A quick refresher on citations

Listings on formal directories are called “structured citations.” When other types of platforms (like online news, blogs, best-of lists, etc.) reference a local business’ complete or partial contact information, that’s called an “unstructured citation.” And the best unstructured citations of all include links, of course!

For example, the San Francisco branch of a natural foods grocery store gets a linked unstructured citation from a major medical center in their city via a blog post about stocking a pantry with the right ingredients for healthier eating. Google and consumers encounter this reference and understand that trust and authority are being conveyed and earned.

The more often websites that are relevant to your location or industry link to you within their own content, the better your chances of ranking well in Google’s organic and local search engine results.

Why linked unstructured citations are growing in importance right now

Link building is as old as organic SEO. Structured citation building is as old as local SEO. Both practices have long sought to influence Google rankings. But a close read of the local search marketing community these days points up an increasing emphasis on the value of unstructured citations. In fact, local links were one of the top three takeaways from the 2018 Local Search Ranking Factors survey. Why is this?

Google has become the dominant force in local consumer experiences, keeping as many actions as possible within their own interface instead of sending searchers to company websites. Because links influence rank within that interface, most local businesses will need to move beyond traditional structured citations to impress Google with mentions on a diverse variety of relevant websites. While structured citations are rightly referred to as "table stakes" for all local businesses, it's the unstructured ones that can be competitive difference-makers in tough markets.

Meanwhile, Google is increasingly monetizing local search results. A prime example of this is their Local Service Ads (LSA) program which acts as lead gen between Google and service area businesses like plumbing and housekeeping companies. Savvy local brands (including brick-and-mortar models) will see the way the wind is blowing with this and work to form non-Google-dependent sources of traffic and lead generation. A good linked unstructured citation on a highly relevant publication can drive business without having to pay Google a dime.

Your goal with linked unstructured citations is to build your community footprint and your authority simultaneously. All you need is the right tools for the research phase!

Fishing for opportunities with Link Intersect

For the sake of this tutorial, let’s choose at random a small B&B in Albuquerque — Bottger.com — as our hypothetical client. Let’s say that the innkeeper wants to know how the big Tribal resort casinos are earning publicity and links, in the hopes of finding opportunities for a smaller hospitality business, too. *Note that these aren’t absolutely direct competitors, but they share a city and an overall industry.

We're going to use Moz's Link Intersect tool to do this research for Bottger Mansion. This tool could help Bottger uncover all kinds of links and unstructured linked citation opportunities, depending on how it's used. For example, the tool could surface the following (a rough sketch of the underlying idea appears after this list):

Links that direct or near-direct competitors have, but that Bottger doesn’t

Locally relevant links from domains/pages about Bottger’s locale

Industry-relevant links from domains/pages about the hospitality industry
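
To make the mechanics concrete, the whole exercise boils down to a set difference over linking root domains: sites that link to the big fish but not to your client. Here's a rough Python sketch of that idea (it is not the Link Intersect tool itself, and the CSV filenames and "root_domain" column name are hypothetical stand-ins for whatever link exports you have on hand):

```python
# A minimal sketch of the "link intersect" idea itself, separate from any particular tool.
# Assumes you've exported linking root domains for a competitor and for your own site
# as one-column CSVs; the filenames and column name are hypothetical.
import csv

def load_domains(path, column="root_domain"):
    """Read a one-column CSV export of linking root domains into a set."""
    with open(path, newline="", encoding="utf-8") as f:
        return {row[column].strip().lower() for row in csv.DictReader(f) if row.get(column)}

competitor_links = load_domains("resort_casino_linking_domains.csv")  # the local "big fish"
our_links = load_domains("bottger_linking_domains.csv")               # the smaller B&B

# Domains that link to the big fish but not to us become the outreach shortlist.
opportunities = sorted(competitor_links - our_links)
for domain in opportunities[:25]:
    print(domain)
```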

Step 1: Find the "big fish"

A client may already know who the “big fish” in their community are, or you can cast a net by identifying popular local events and seeing which businesses sponsor them. Sponsorships can be pricey, depending on the event, so if a local company sponsors a big event, it’s an indication that they're a larger enterprise with the budget to pursue a wide array of creative PR ideas. Larger enterprises can serve as models for small business emulation, at scale.

In our case …

A UK parliamentary committee has published the cache of Facebook documents it dramatically seized last week.

The documents were obtained via a legal discovery process by a startup that's suing the social network in a California court, in a case related to Facebook changing data access permissions back in 2014/15.

The court had sealed the documents but the DCMS committee used rarely deployed parliamentary powers to obtain them from the Six4Three founder, during a business trip to London.

You can read the redacted documents here — all 250 pages of them.

In a series of tweets regarding the publication, committee chair Damian Collins says he believes there is “considerable public interest” in releasing them.

“They raise important questions about how Facebook treats users data, their policies for working with app developers, and how they exercise their dominant position in the social media market,” he writes.

“We don’t feel we have had straight answers from Facebook on these important issues, which is why we are releasing the documents. We need a more public debate about the rights of social media users and the smaller businesses who are required to work with the tech giants. I hope that our committee investigation can stand up for them.”

The committee has been investigating online disinformation and election interference for the best part of this year, and has been repeatedly frustrated in its attempts to extract answers from Facebook.

But it is protected by parliamentary privilege — hence it’s now published the Six4Three files, having waited a week in order to redact certain pieces of personal information.

Collins has included a summary of key issues, as the committee sees them after reviewing the documents, in which he draws attention to six issues.

Here is his summary of the key issues:

White Lists: Facebook have clearly entered into whitelisting agreements with certain companies, which meant that after the platform changes in 2014/15 they maintained full access to friends data. It is not clear that there was any user consent for this, nor how Facebook decided which companies should be whitelisted or not.

Value of friends data: It is clear that increasing revenues from major app developers was one of the key drivers behind the Platform 3.0 changes at Facebook. The idea of linking access to friends data to the financial value of the developers relationship with Facebook is a recurring feature of the documents.

Reciprocity: Data reciprocity between Facebook and app developers was a central feature in the discussions about the launch of Platform 3.0.

Android: Facebook knew that the changes to its policies on the Android mobile phone system, which enabled the Facebook app to collect a record of calls and texts sent by the user, would be controversial. To mitigate any bad PR, Facebook planned to make it as hard as possible for users to know that this was one of the underlying features of the upgrade of their app.

Onavo: Facebook used Onavo to conduct global surveys of the usage of mobile apps by customers, and apparently without their knowledge. They used this data to assess not just how many people had downloaded apps, but how often they used them. This knowledge helped them to decide which companies to acquire, and which to treat as a threat.

Targeting competitor Apps: The files show evidence of Facebook taking aggressive positions against apps, with the consequence that denying them access to data led to the failure of that business.

The publication of the files comes at an awkward moment for Facebook — which remains on the back foot after a string of data and security scandals, and has just announced a major policy change — ending a long-running ban on apps copying its own platform features.

Albeit the timing of Facebook’s policy shift announcement hardly looks incidental — given Collins said last week the committee would publish the files this week.

The policy in question has been used by Facebook to close down competitors in the past, such as — two years ago — when it cut off style transfer app Prisma’s access to its live-streaming Live API when the startup tried to launch a livestreaming art filter (Facebook subsequently launched its own style transfer filters for Live).

So its policy reversal now looks intended to defuse regulatory scrutiny around potential antitrust concerns.

But emails in the Six4Three files suggesting that Facebook took “aggressive positions” against competing apps could spark fresh competition concerns.

In one email dated January 24, 2013, a Facebook staffer, Justin Osofsky, discusses Twitter’s launch of its short video clip app, Vine, and says Facebook’s response will be to close off its API access.

“As part of their NUX, you can find friends via FB. Unless anyone raises objections, we will shut down their friends API access today. We’ve prepared reactive PR, and I…

Facebook will now freely allow developers to build competitors to its features upon its own platform. Today Facebook announced it will drop Platform Policy section 4.1 which stipulates “Add something unique to the community. Don’t replicate core functionality that Facebook already provides.”

Facebook had previously enforced that policy selectively to hurt competitors that had used its Find Friends or viral distribution features. Apps like Vine, Voxer, MessageMe, Phhhoto and more had been cut off from Facebook’s platform for too closely replicating its video, messaging, or GIF creation tools. Find Friends is a vital API that lets users find their Facebook friends within other apps.

The move will significantly reduce the platform risk of building on the Facebook platform. It could also cast it in a better light in the eyes of regulators. Anyone seeking ways Facebook abuses its dominance will lose a talking point. And by creating a more fair and open platform where developers can build without fear of straying too close to Facebook’s history or roadmap, it could reinvigorate its developer ecosystem.

A Facebook spokesperson provided this statement to TechCrunch:

“We built our developer platform years ago to pave the way for innovation in social apps and services. At that time we made the decision to restrict apps built on top of our platform that replicated our core functionality. These kind of restrictions are common across the tech industry with different platforms having their own variant including YouTube, Twitter, Snap and Apple. We regularly review our policies to ensure they are both protecting people’s data and enabling useful services to be built on our platform for the benefit of the Facebook community. As part of our ongoing review we have decided that we will remove this out of date policy so that our platform remains as open as possible. We think this is the right thing to do as platforms and technology develop and grow.”

The change comes after Facebook locked down parts of its platform in April for privacy and security reasons in the wake of the Cambridge Analytica scandal. Diplomatically, Facebook said it didn’t expect the change to impact its standing with regulators but it’s open to answering their questions.

Facebook shouldn’t block you from finding friends on competitors

Earlier in April, I wrote a report on how Facebook used Policy 4.1 to attack competitors it saw gaining traction. The article "Facebook shouldn't block you from finding friends on competitors" advocated for Facebook to make its social graph more portable and interoperable so users could decamp to competitors if they felt they weren't treated right, in order to coerce Facebook to act better.

The policy change will apply retroactively. Old apps that lost Find Friends or other functionality will be able to submit their app for review and once approved, will regain access.

Friend lists still can’t be exported in a truly interoperable way. But at least now Facebook has enacted the spirit of that call to action. Developers won’t be in danger of losing access to that Find Friends Facebook API for treading in its path.

 

Below is an excerpt from our previous reporting on how Facebook enforced Platform Policy 4.1, which before today's change was used to hamper competitors:

Voxer was one of the hottest messaging apps of 2012, climbing the charts and raising a $30 million round with its walkie-talkie-style functionality. In early January 2013, Facebook copied Voxer by adding voice messaging into Messenger. Two weeks later, Facebook cut off Voxer’s Find Friends access. Voxer CEO Tom Katis told me at the time that Facebook stated his app with tens of millions of users was a “competitive social network” and wasn’t sharing content back to Facebook. Katis told us he thought that was hypocritical. By June, Voxer had pivoted toward business communications, tumbling down the app charts and leaving Facebook Messenger to thrive.

MessageMe had a well-built chat app that was growing quickly after launching in 2013, posing a threat to Facebook Messenger. Shortly before reaching 1 million users, Facebook cut off MessageMe‘s Find Friends access. The app ended up selling for a paltry double-digit millions price tag to Yahoo before disintegrating.

Phhhoto and its fate show how Facebook’s data protectionism encompasses Instagram. Phhhoto’s app that let you shoot animated GIFs was growing popular. But soon after it hit 1 million users, it got cut off from Instagram’s social graph in April 2015. Six months later, Instagram launched Boomerang, a blatant clone of Phhhoto. Within two years, Phhhoto shut down its app, blaming Facebook and Instagram. “We watched [Instagram CEO Kevin] Systrom and his product team quietly using PHHHOTO almost a year before Boomerang was released. So it wasn’t a surprise at all . . . I’m not sure Instagram has a creative bone in their en…

This is the most complete list of link building strategies on the Web. Period.

In fact, you’ll find 178 strategies, tips and tactics on this page.

So if you’re looking to build powerful backlinks, you’ll really enjoy this list.

Each strategy below is tagged with a difficulty level (Beginner, Intermediate, or Advanced) and with whether it's one of Brian's favorite strategies.

Beginner techniques

Alumni Lists and Directories

Difficulty: Beginner | Brian's favorite: No

Most college sites (or standalone alumni websites) have a section of their site dedicated to their alumni. And some of them link out.

For example, here’s a business listing (with a link) on the SMU Alumni site.

Ask People You Know for Links

Difficulty: Beginner | Brian's favorite: No

This can be friends, relatives, employees, colleagues, business partners, clients… just about anyone.

More and more people are creating their own sites and blogs (or know people that do).

That said: you really only want to get links from relevant websites. If it’s not relevant, it’s not going to have much of an impact. Plus, these people might be (rightly) hesitant to link to your jewelry store from their football blog.

Be Specific With Your Outreach

Difficulty: Beginner | Brian's favorite: Yes

Don’t be afraid to (gently) let your outreach targets know exactly where you want your link to go.

This isn’t being pushy: it’s considerate. Otherwise you force them to figure out where your link should go.

Here’s a real life example of a very specific outreach email:

Better Business Bureau

Difficulty: Beginner | Brian's favorite: No

Links from the BBB are now all nofollowed. And Google has said that getting listed on the BBB doesn’t directly help your SEO. That said, if you believe that getting listed on the BBB website itself has some SEO value, it might be worthwhile.

The price of a BBB listing is determined by region and by number of employees. For example, St. Louis BBB ranges from $370 for 1-3 employees all the way to $865+ for 100-200 employees. Anything over that, as well as additional websites, incurs additional charges.

Either way, you are SUPPOSED to get a link of some kind out of all of this. You need to check on your listing once it is published, as each region has its own rules regarding its directory. There have been some instances where a business' website URL in the directory listing was NOT a live link, only text. All you have to do is contact your BBB representative and ask for that to be changed.

Blog Commenting

Difficulty: Beginner | Brian's favorite: No

Do blog comments directly lead to dofollow links? No.

But they’re an awesome way to get on a blogger’s radar screen… which CAN lead to links.

For example, in the early days of Backlinko, I’d comment on marketing and SEO blogs all the time:

And this helped me build relationships with bloggers in my niche. And weeks or months later, I noticed some bloggers spontaneously linking to me. And others asked me to guest post on their site.

Blog Directories

Difficulty: Beginner | Brian's favorite: No

If you have a blog, you can submit it to various blog directories.

For example, here’s a link to my blog from AllTop:

Chamber of Commerce

Difficulty: Beginner | Brian's favorite: No

A link from your Chamber of Commerce is practically guaranteed and just waiting for you to grab. In some cases, though, it takes a little bit of time to find the right person to get in touch with.

Company Directory Submissions

Difficulty: Beginner | Brian's favorite: No

Just like general web directories, you can submit your site to general company directories.

Just like with most submission-based tactics, focus on getting links from highly-relevant sites. For example, are you a startup in NYC? Then this business directory would be a solid link.

Contribute to Crowdsourced Posts

Difficulty: Beginner | Brian's favorite: Yes

Unless you’re insanely busy, always say “YES!” to crowdsourced post invites. They usually ask you stuff you already know. So it should only take you 5-10 minutes to write a response.

For example, here’s a link that I got from a crowdsourced post a while back:

Create an RSS feed

Difficulty: Beginner | Brian's favorite: No

If your blog runs on any popular Content Management System (like WordPress) you probably already have an RSS feed. If you don’t, create one.

How does an RSS feed help with link building? It's simple. There are sites out there that will scrape your content (stealing it without permission). And they find your content via your RSS feed. Just make sure to include internal links to other pages on your site in your content. That way, even if the scrapers don't link to your original post, they'll at least copy your internal links.
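
If you want to sanity-check that your feed is actually set up this way, here's a small Python sketch (not from the original post) that walks a feed and counts the absolute internal links in each item. The feed URL and domain are placeholders, and it assumes the feedparser and beautifulsoup4 packages are installed:

```python
# A small sketch: count how many absolute links back to your own domain each feed item
# carries, so scrapers who republish the feed copy those links along with the content.
import feedparser
from bs4 import BeautifulSoup

FEED_URL = "https://example.com/feed"  # hypothetical: swap in your own feed
OWN_DOMAIN = "example.com"             # hypothetical: your own domain

feed = feedparser.parse(FEED_URL)
for entry in feed.entries:
    # Prefer full content when the feed provides it; otherwise fall back to the summary.
    html = entry.content[0].value if "content" in entry else entry.get("summary", "")
    links = [a.get("href", "") for a in BeautifulSoup(html, "html.parser").find_all("a")]
    internal = [href for href in links if OWN_DOMAIN in href]
    print(f"{entry.get('title', '(untitled)')}: {len(internal)} absolute internal link(s)")
```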

Here’s an example of a scraper site that scraped my content… including my internal links:

Create Shoulder Niche Content

Difficulty: Intermediate | Brian's favorite: Yes

In a boring niche? Well, it’s still possible to get links. You just need to be creative.

For example, one industry study found that “tangential content” (content not directly related to what a site sells) resulted in 30% more links and 77% more social shares:

What’s one thing that you are constantly seeing on the web? Especially if you are on Instagram, Facebook, and YouTube?

Come on, take a guess…

No, I am not talking about people taking half-naked selfies of themselves or posting their lunches. I’m talking about people showing off. From taking pictures of their cars or money and even their homes to standing in front of private jets and yachts.

You know… one of those images like the one above. And if you are wondering, that isn’t my car. A friend took that picture of me when I was at the race track… heck I don’t even drive anymore (or have any more hair!).

But do you want to know a little secret?

The loudest one in the room is the weakest one in the room

Now, I didn’t come up with that quote. It’s from the movie American Gangster that stars Denzel Washington.

But sadly, that doesn't stop people from taking advice from all of the "loud" marketers out there showing off.

But I'll let you in on a little secret…

People who really have money don’t need to run ads showing off how much cash they have and they surely don’t care what others think about them.

I learned this from my parents, as well as a few other valuable things.

So what did my parents teach me?

I didn’t grow up with money, and I didn’t have rich parents. My first job was picking up trash, cleaning restrooms, and sweeping up vomit at a theme park.

And I don’t want anyone to feel sorry for me either. My life wasn’t bad at all. I didn’t grow up poor either.

My parents worked really hard as immigrants and eventually, they were able to provide a middle-class lifestyle for me and my sister.

But as I was growing up, my parents taught me that showing off only draws more attention and causes problems.

That’s why I don’t have “lifestyle” photos of myself. Heck, I don’t really even talk much about my personal life as I prefer to keep things private… as much as possible at least. That’s the main reason I don’t use Instagram.

See, when I was growing up, I was thankful for whatever I had.

When I was growing up, that’s the car my parents gave me to drive. Luckily for me, my parents were generous enough not to make me pay for the car or even the gas.

Sure, the car had a sticker in the back window promoting my mom’s daycare business at the time, but I didn’t mind. When I would go to business meetings people would make fun of me but that didn’t bother me either.

Want to know why? I had a free car.

This holiday season, Facebook is hoping you’ll use a relatively little-known feature to share your gift ideas.

With collections, users can already save Facebook content — whether it’s a post, an ad, a video in Facebook Watch or a listing on the Marketplace. Now the company says that you’ll be able to share these collections with your Facebook friends.

The idea is to turn collections into more of a collaborative tool. To do so, you’ll need to open up a collection and then click the “invite” button. Then you can invite other users to become contributors to that collection.

A Facebook blog post explains how this collaboration might work:

If you and a group of friends are planning a holiday party, one of them can create a collection called “holiday recipes” and share with each person helping to plan. Those invited can add holiday recipes they’ve discovered on Facebook and save in the shared collection. The possibilities extend beyond the holiday season and can be useful for coordinating with friends on things like summer vacation planning, wedding registry ideas, furnishing a new apartment and more.

If you had no idea that this feature existed before now, I’m right there with you. Apparently Facebook has been testing “save” capabilities since 2014, which (quietly) evolved into the collections feature last year.

The company says “millions” of people are already using collections. Now that they’re becoming more of a social tool, it seems that Facebook is ready to do more to promote them.

Tumblr, a microblogging service whose impact on internet culture has been massive and unique, is preparing for a sweeping change that's sure to upset many of its millions of users.

On December 17, Tumblr will be banning porn, errr “adult content,” from its site and encouraging users to flag that content for removal. Existing adult content will be set to a “private mode” viewable only to the original poster.

What does “adult content” even mean? Well, according to Tumblr, the ban means the removal of any media that depicts “real-life human genitals or female-presenting nipples, and any content—including photos, videos, GIFs and illustrations—that depicts sex acts.”

This is a lot more complicated than just deleting some hardcore porn from the site; over the past several years Tumblr has become a hub for communities and artists with more adult themes. This has largely been born out of the fact that adult content has been disallowed from other multimedia-focused social platforms. There are bans on nudity and sexual content on Instagram and Facebook, though Twitter has more relaxed standards.

Why now? The Tumblr app was removed from the iOS app store several weeks ago due to an issue with its content filtering that led the company to issue a statement. “We’re committed to helping build a safe online environment for all users, and we have a zero tolerance policy when it comes to media featuring child sexual exploitation and abuse,” the company had detailed. “We’re continuously assessing further steps we can take to improve and there is no higher priority for our team.”

We’ve reached out to Tumblr for further comment.

Update: In a blog post titled "A better, more positive Tumblr," the company's CEO Jeff D'Onofrio downplayed claims that the content ban was related to recent issues surrounding child porn, saying it is instead intended to make the platform one "where more people feel comfortable expressing themselves."

“As Tumblr continues to grow and evolve, and our understanding of our impact on our world becomes clearer, we have a responsibility to consider that impact across different age groups, demographics, cultures, and mindsets,” the post reads. “Bottom line: There are no shortage of sites on the internet that feature adult content. We will leave it to them and focus our efforts on creating the most welcoming environment possible for our community.”

The imminent “adult content” ban will not apply to media connected with breastfeeding, birth or more general “health-related situations” like surgery, according to the company.

Tumblr is also aiming to minimize the impact on the site's artistic community, but this level of nuance is going to be incredibly difficult for them to enforce uniformly and will more than likely lead to a lot of frustrated users being told that their content does not qualify as "art."

Tumblr is likewise looking to minimize the impact on more artistic storytelling, saying that content "such as erotica, nudity related to political or newsworthy speech, and nudity found in art, such as sculptures and illustrations, are also stuff that can be freely posted on Tumblr."

I don’t know how much it needs to be reiterated that child porn is a major issue plaguing the web, but a blanket ban on adult content on a platform that has gathered so many creatives working with NSFW themes is undoubtedly going to be a pretty controversial decision for the company.

Posted by Jeff_Baker

Grab yourself a cup of coffee (or two) and buckle up, because we’re doing maths today.

Again.

Back it on up...

A quick refresher from last time: I pulled data from 50 keyword-targeted articles written on Brafton’s blog between January and June of 2018.

We wrote those articles using a technique, published earlier on Moz, that generates some seriously awesome results (we're talking more than doubling our organic traffic in the last six months, but we will get to that in another publication).

We pulled this data again… Only I updated and reran all the data manually, doubling the dataset. No APIs. My brain is Swiss cheese.

We wanted to see how newly written, original content performs over time, and which factors may have impacted that performance.

Why do this the hard way, dude?

“Why not just pull hundreds (or thousands!) of data points from search results to broaden your dataset?”, you might be thinking. It’s been done successfully quite a few times!

Trust me, I was thinking the same thing while weeping tears into my keyboard.

The answer was simple: I wanted to do something different from the massive aggregate studies. I wanted a level of control over as many potentially influential variables as possible.

By using our own data, the study benefited from:

The same root Domain Authority across all content.

Similar individual URL link profiles (some laughs on that later).

Known original publish dates, with no reoptimization efforts or tinkering.

Known original keyword targets for each blog (rather than guessing).

Known and consistent content depth/quality scores (MarketMuse).

Similar content writing techniques for targeting specific keywords for each blog.

You will never eliminate the possibility of misinterpreting correlation as causation. But controlling some of the variables can help.

As Rand once said in a Whiteboard Friday, “Correlation does not imply causation (but it sure is a hint).”

Caveat:

What we gained in control, we lost in sample size. A sample size of 96 is much less useful than ten thousand, or a hundred thousand. So look at the data carefully and use discretion when considering the ranking factors you find most likely to be true.

This resource can help gauge the confidence you should put into each Pearson Correlation value. Generally, the stronger the relationship, the smaller the sample size needed to be confident in the results.
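
If you'd rather compute that confidence than look it up in a table, the usual approach is a t-test on r. The snippet below is a minimal sketch (not part of the study) that uses our sample size of 96 and, purely for illustration, an r of 0.343:

```python
# A minimal sketch: the standard t-test on a Pearson r,
# t = r * sqrt(n - 2) / sqrt(1 - r^2), turned into a two-tailed p-value.
# The numbers below are purely illustrative.
from math import sqrt
from scipy import stats

def correlation_p_value(r, n):
    """Two-tailed p-value for a Pearson correlation r observed in a sample of size n."""
    t = r * sqrt(n - 2) / sqrt(1 - r ** 2)
    return 2 * stats.t.sf(abs(t), df=n - 2)

print(correlation_p_value(0.343, 96))  # ~0.0006: at n = 96, a moderate r is hard to get by chance
print(correlation_p_value(0.343, 12))  # ~0.27: the same r in a tiny sample proves very little
```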

So what exactly have you done here?

We have generated hints at what may influence the organic performance of newly created content. No more, and no less. But they are indeed interesting hints and maybe worth further discussion or research.

What have you not done?

We have not published sweeping generalizations about Google’s algorithm. This post should not be read as a definitive guide to Google’s algorithm, nor should you assume that your site will demonstrate the same correlations.

So what should I do with this data?

The best way to read this article is to look at the potential correlations we observed in our data and consider how those correlations may or may not apply to your content and strategy.

I’m hoping that this study takes a new approach to studying individual URLs and stimulates constructive debate and conversation.

Your constructive criticism is welcome, and hopefully pushes these conversations forward!

The stat sheet

So quit jabbering and show me the goods, you say? Alright, let’s start with our stats sheet, formatted like a baseball card, because why not?:

*Note: Only blogs with complete ranking data were used in the study. We threw out blogs with missing data rather than adding arbitrary numbers.

And as always, here is the original data set if you care to reproduce my results.

So now the part you have been waiting for...

The analysis

To start, please use a refresher on the Pearson Correlation Coefficient from my last blog post, or Rand’s.

1. Time and performance

I started with a question: “Do blogs age like a Macallan 18 served up neat on a warm summer Friday afternoon, or like tepid milk on a hot summer Tuesday?”

Does the time indexed play a role in how a piece of content performs?

Correlation 1: Time and target keyword position

First we will map the target keyword ranking positions against the number of days its corresponding blog has been indexed. Visually, if there is any correlation we will see some sort of negative or positive linear relationship.

There is a clear negative relationship between the two variables, which means the two variables may be related. But we need to go beyond visuals and use the PCC.

Days live vs. target keyword position
PCC: -.343
Relationship: Moderate

The data shows a moderate relationship between how long a blog has been indexed and the positional ranking of the target keyword.
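
If you'd like to reproduce this kind of check on your own URLs, the correlation itself is a one-liner with SciPy; the sketch below uses made-up placeholder numbers rather than our dataset:

```python
# A minimal sketch of how a correlation like this can be computed; the two arrays
# are hypothetical placeholder values, not the study's data.
from scipy.stats import pearsonr

days_live = [12, 35, 60, 90, 120, 150, 180]    # days each post has been indexed (hypothetical)
keyword_position = [28, 19, 15, 11, 9, 8, 6]   # target keyword rank for each post (hypothetical)

r, p_value = pearsonr(days_live, keyword_position)
print(f"PCC = {r:.3f}, p = {p_value:.4f}")
# A negative r means posts that have been indexed longer tend to sit at better
# (numerically lower) ranking positions.
```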

But before getting carried away, we shouldn’t solely trust one statistical method…