
Startup Tools: Cloud Services and Tools

A web hosting service is a type of Internet hosting service that allows individuals and organizations to make their websites accessible via the World Wide Web. Web hosts are companies that provide space on a server, owned or leased for use by clients, as well as Internet connectivity, typically in a data center. Web hosts can also provide data center space and Internet connectivity for other servers located in their data center, a practice called colocation (also known as "housing" in Latin America and France).

The scope of web hosting services varies greatly. The most basic is web page and small-scale file hosting, where files can be uploaded via File Transfer Protocol (FTP) or a Web interface. The files are usually delivered to the Web "as is" or with minimal processing. Many Internet service providers (ISPs) offer this service free to subscribers. Individuals and organizations may also obtain Web page hosting from alternative service providers. Personal web site hosting is typically free, advertisement-sponsored, or inexpensive. Business web site hosting often has a higher expense depending upon the size and type of the website.

Single page hosting is generally sufficient for personal web pages. A complex site calls for a more comprehensive package that provides database support and application development platforms (e.g. PHP, Java, Ruby on Rails, ColdFusion, or ASP.NET). These facilities allow customers to write or install scripts for applications like forums and content management. Also, Secure Sockets Layer (SSL) is typically used for e-commerce.

The host may also provide an interface or control panel for managing the Web server and installing scripts, as well as other modules and service applications like e-mail. Some hosts specialize in certain software or services (e.g. e-commerce), which are commonly used by larger companies that outsource network infrastructure.

The availability of a website is measured by the percentage of a year in which the website is publicly accessible and reachable via the Internet. This is different from measuring the uptime of a system: uptime means the system itself is online, but it does not account for whether the system can actually be reached, as in the event of a network outage.

The formula to determine a system's availability is straightforward: total time = 365 days per year * 24 hours per day * 60 minutes per hour = 525,600 minutes per year. To calculate how many minutes of downtime a system may experience per year, subtract the uptime guarantee from 1 and multiply the result by the total time in a year.

In the example of 99.99%: (1 - .9999) * 525,600 = 52.56 allowable minutes down per year.
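The same arithmetic as a quick Python sketch (illustrative, using the uptime guarantees hosts most commonly quote):

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

for guarantee in (0.99, 0.999, 0.9999, 0.99999):
    downtime = (1 - guarantee) * MINUTES_PER_YEAR  # allowable minutes down per year
    print(f"{guarantee * 100:g}% uptime allows {downtime:,.2f} minutes of downtime per year")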

The following table shows how a given availability percentage translates to the amount of time a system would be unavailable per year, month, or week.

Availability %    Downtime per year    Downtime per month*    Downtime per week
90%               36.5 days            72 hours               16.8 hours
99%               3.65 days            7.2 hours              1.68 hours
99.9%             8.76 hours           43.2 minutes           10.1 minutes
99.99%            52.56 minutes        4.32 minutes           1.01 minutes
99.999%           5.26 minutes         25.9 seconds           6.05 seconds

*For monthly calculations, a 30-day month is used.

A hosting provider's SLAs may include a certain amount of scheduled downtime per year in order to perform maintenance on the systems. This scheduled downtime is often excluded from the SLA timeframe, and needs to be subtracted from the Total Time when availability is calculated. Depending on the verbiage of an SLA, if the availability of a system drops below that in the signed SLA, a hosting provider often will provide a partial refund for time lost.

Internet hosting services can run Web servers. Many large companies that are not Internet service providers also need to be permanently connected to the web to send email, files, and other data to other sites. Such a company may use its servers as a website host, providing details of its goods and services and facilities for online orders.

  • Free web hosting service: offered by various companies, sometimes supported by advertisements, and usually limited in features when compared to paid hosting.
  • Shared web hosting service: one's website is placed on the same server as many other sites, ranging from a few to hundreds or thousands. Typically, all domains may share a common pool of server resources, such as RAM and the CPU. The features available with this type of service can be quite basic and not flexible in terms of software and updates. Resellers often sell shared web hosting and web companies often have reseller accounts to provide hosting for clients.
  • Reseller web hosting: allows clients to become web hosts themselves. Resellers could function, for individual domains, under any combination of these listed types of hosting, depending on who they are affiliated with as a reseller. Resellers' accounts may vary tremendously in size: they may have anything from their own virtual dedicated server to a colocated server. Many resellers provide a nearly identical service to their provider's shared hosting plan and provide the technical support themselves.
  • Virtual Dedicated Server: also known as a Virtual Private Server (VPS), divides server resources into virtual servers, where resources can be allocated in a way that does not directly reflect the underlying hardware. VPS will often be allocated resources based on a one server to many VPSs relationship, however virtualisation may be done for a number of reasons, including the ability to move a VPS container between servers. The users may have root access to their own virtual space. Customers are sometimes responsible for patching and maintaining the server.
  • Dedicated hosting service: the user gets his or her own Web server and gains full control over it (user has root access for Linux/administrator access for Windows); however, the user typically does not own the server. One type of Dedicated hosting is Self-Managed or Unmanaged. This is usually the least expensive for Dedicated plans. The user has full administrative access to the server, which means the client is responsible for the security and maintenance of his own dedicated server.
  • Managed hosting service: the user gets his or her own Web server but is not allowed full control over it (user is denied root access for Linux/administrator access for Windows); however, they are allowed to manage their data via FTP or other remote management tools. The user is disallowed full control so that the provider can guarantee quality of service by not allowing the user to modify the server or potentially create configuration problems. The user typically does not own the server. The server is leased to the client.
  • Colocation web hosting service: similar to the dedicated web hosting service, but the user owns the colo server; the hosting company provides physical space that the server takes up and takes care of the server. This is the most powerful and expensive type of web hosting service. In most cases, the colocation provider may provide little to no support directly for their client's machine, providing only the electrical, Internet access, and storage facilities for the server. In most cases for colo, the client would have his own administrator visit the data center on site to do any hardware upgrades or changes. Formerly, many colocation providers would accept any system configuration for hosting, even ones housed in desktop-style minitower cases, but most hosts now require rack mount enclosures and standard system configurations.
  • Cloud hosting: a newer type of hosting platform that offers customers powerful, scalable and reliable hosting based on clustered, load-balanced servers and utility billing. A cloud-hosted website may be more reliable than alternatives, since other computers in the cloud can compensate when a single piece of hardware goes down. Local power disruptions or even natural disasters are also less problematic for cloud-hosted sites, as cloud hosting is decentralized. Cloud hosting also allows providers to charge users only for the resources they consume, rather than a flat fee for the amount the user expects to use, or a fixed up-front hardware investment. On the other hand, the lack of centralization may give users less control over where their data is located, which could be a problem for users with data security or privacy concerns.
  • Clustered hosting: having multiple servers hosting the same content for better resource utilization. Clustered Servers are a perfect solution for high-availability dedicated hosting, or creating a scalable web hosting solution. A cluster may separate web serving from database hosting capability. (Usually Web hosts use Clustered Hosting for their Shared hosting plans, as there are multiple benefits to the mass managing of clients).
  • Grid hosting: this form of distributed hosting is when a server cluster acts like a grid and is composed of multiple nodes.
  • Home server: usually a single machine placed in a private residence can be used to host one or more web sites from a usually consumer-grade broadband connection. These can be purpose-built machines or more commonly old PCs. Some ISPs actively attempt to block home servers by disallowing incoming requests to TCP port 80 of the user's connection and by refusing to provide static IP addresses. A common way to attain a reliable DNS host name is by creating an account with a dynamic DNS service. A dynamic DNS service will automatically change the IP address that a URL points to when the IP address changes.

Web hosts may also offer more specific types of hosting, such as file, image, video, blog, or e-mail hosting.

Web hosting is often provided as part of a general Internet access plan; there are many free and paid providers offering these types of web hosting.

A customer needs to evaluate the requirements of the application to choose what kind of hosting to use. Such considerations include database server software, scripting software, and operating system. Most hosting providers offer Linux-based web hosting, which supports a wide range of software. A typical configuration for a Linux server is the LAMP platform: Linux, Apache, MySQL, and PHP/Perl/Python. The web hosting client may also want other services, such as email for their business domain, databases, or multimedia services. A customer may also choose Windows as the hosting platform; the customer can still choose from PHP, Perl, and Python, but may also use ASP.NET or Classic ASP. Web hosting packages often include a Web content management system, so the end-user does not have to worry about the more technical aspects.


This is a high-level, technical description of how Heroku works. It ties together many of the concepts you'll encounter while writing, configuring, deploying and running applications on the Heroku platform.

Performing one of the Getting Started tutorials will make the concepts in this documentation more concrete.

Read this document sequentially: in order to tell a coherent story, it incrementally unveils and refines the concepts describing the platform.

The final section ties all the definitions together, providing a deploy-time and runtime-view of Heroku.

Defining an application

Heroku lets you deploy, run and manage applications written in Ruby, Node.js, Java, Python, Clojure, Scala and PHP.

An application is a collection of source code written in one of these languages, perhaps a framework, and some dependency description that instructs a build system as to which additional dependencies are needed in order to build and run the application.

Terminology (Preliminary): Applications consist of your source code and a description of any dependencies.

Dependency mechanisms vary across languages: in Ruby you use a Gemfile, in Python a requirements.txt, in Node.js a package.json, in Java a pom.xml and so on.
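For example, a minimal Python requirements.txt might look like the following (the packages and versions are purely illustrative):

flask==2.0.1
gunicorn==20.1.0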

The source code for your application, together with the dependency file, should provide enough information for the Heroku platform to build your application, to produce something that can be executed.

Knowing what to execute

You don't need to make many changes to an application in order to run it on Heroku. One requirement is informing the platform as to which parts of your application are runnable.

If you're using some established framework, Heroku can figure it out. For example, in Ruby on Rails, it's typically rails server, in Django it's python <app>/manage.py runserver and in Node.js it's the main field in package.json.

For other applications, you may need to explicitly declare what can be executed. You do this in a text file that accompanies your source code - a Procfile. Each line declares a process type - a named command that can be executed against your built application. For example, your Procfile may look like this:

web: java -jar lib/foobar.jar $PORT
queuty: java -jar lib/queue-processor.jar

This file declares a web process type and provides the command that needs to be executed in order to run it (in this case, java -jar lib/foobar.jar $PORT). It also declares a queuty process type, and its corresponding command.

The earlier definition of an application can now be refined to include this single additional Procfile.

Terminology: Applications consist of your source code, a description of any dependencies, and a Procfile.

Heroku is a polyglot platform - it lets you build, run and scale applications in a similar manner across all the languages - utilizing the dependencies and Procfile. The Procfile exposes an architectural aspect of your application (in the above example there are two entry points to the application) and this architecture lets you, for example, scale each part independently. An excellent guide to architecture principles that work well for applications running on Heroku can be found in Architecting Applications for Heroku.

Deploying applications

Git is a powerful, distributed version control system that many developers use to manage and version source code. The Heroku platform uses git as the primary means for deploying applications.

When you create an application on Heroku, it associates a new git remote, typically named heroku, with the local git repository for your application.

As a result, deploying code is just the familiar git push, but to the heroku remote instead:
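$  git push heroku master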

Terminology: Deploying applications involves sending the application to Heroku using git.

Deployment then, is about using git as a transport mechanism - moving your application from your local system to Heroku.

Building applications

When the Heroku platform receives a git push, it initiates a build of the source application. The build mechanism is typically language-specific, but follows the same pattern: retrieve the specified dependencies, and create any necessary assets (whether as simple as processing style sheets or as complex as compiling code).

For example, when the build system receives a Rails application, it may fetch all the dependencies specified in the Gemfile, as well as generate files based on the asset pipeline. A Java application may fetch binary library dependencies using Maven, compile the source code together with those libraries, and produce a JAR file to execute.

The source code for your application, together with the fetched dependencies and output of the build phase such as generated assets or compiled code, as well as the language and framework, are assembled into a slug.

These slugs are a fundamental aspect of what happens during application execution - they contain your compiled, assembled application - ready to run - together with the instructions (the Procfile) of what you may want to execute.

Running applications on dynos

Heroku executes applications by running a command you specified in the Procfile, on a dyno that's been preloaded with your prepared slug (in fact, with your release, which extends your slug with a few items not yet defined: config vars and add-ons).

Think of a running dyno as a lightweight, secure, virtualized Unix container that contains your application slug in its file system.

Generally, if you deploy an application for the first time, Heroku will run 1 web dyno automatically. In other words, it will boot a dyno, load it with your slug, and execute the command you've associated with the web process type in your Procfile.

You have control over how many dynos are running at any given time. Given the Procfile example earlier, you can start 5 dynos, 3 for the web and 2 for the queuty process types, as follows:

$  heroku ps:scale web=3 queuty=2

When you deploy a new version of an application, all of the currently executing dynos are killed, and new ones (with the new release) are started to replace them - preserving the existing dyno formation.

To understand what's executing, you just need to know what dynos are running which process types:

$  heroku ps
 == web: `java -jar lib/foobar.jar $PORT`
 web.1: up 2013/02/07 18:59:17 (~ 13m ago)
 web.2: up 2013/02/07 18:52:08 (~ 20m ago)
 web.3: up 2013/02/07 18:31:14 (~ 41m ago)
 == queuty: `java -jar lib/queue-processor.jar`
 queuty.1: up 2013/02/07 18:40:48 (~ 32m ago)
 queuty.2: up 2013/02/07 18:40:48 (~ 32m ago)

Dynos then, are an important means of scaling your application. In this example, the application is well architected to allow for the independent scaling of web and queue worker dynos.

Config vars

An application's configuration is everything that is likely to vary between environments (staging, production, developer environments, etc.). This includes backing services such as databases, credentials, or environment variables that provide some specific information to your application.

Heroku lets you run your application with a customizable configuration - the configuration sits outside of your application code and can be changed independently of it.

The configuration for an application is stored in config vars. For example, here's how to configure an encryption key for an application:

$  heroku config:set ENCRYPTION_KEY=my_secret_launch_codes
 Adding config vars and restarting demoapp... done, v14
 ENCRYPTION_KEY:     my_secret_launch_codes

At runtime, all of the config vars are exposed as environment variables, so they can be easily extracted programmatically. A Ruby application deployed with the above config var can access it by calling ENV["ENCRYPTION_KEY"].
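A Python application could do the same through os.environ; a minimal sketch, assuming the config var above has been set:

import os

encryption_key = os.environ["ENCRYPTION_KEY"]  # raises KeyError if the var is unset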

All dynos in an application will have access to the exact same set of config vars at runtime.

Releases

Earlier, this article stated that to run your application on a dyno, the Heroku platform loaded the dyno with your most recent slug. This needs to be refined: in fact it loads it with the slug and any config variables you have assigned to the application. The combination of slug and configuration is called a release.

All releases are automatically persisted in an append-only ledger, making managing your application, and different releases, a cinch. Use the heroku releases command to see the audit trail of release deploys:

$  heroku releases
 == demoapp Releases
 v103 Deploy 582fc95  [email protected]   2013/01/31 12:15:35
 v102 Deploy 990d916  [email protected]   2013/01/31 12:01:12

The number next to the deploy message, for example 582fc95, corresponds to the commit hash of the repository you deployed to Heroku.

Every time you deploy a new version of an application, a new slug is created and a new release is generated.

As Heroku contains a store of the previous releases of your application, it's very easy to roll back and deploy a previous release:

$  heroku releases:rollback v102
 Rolling back demoapp... done, v102
 $  heroku releases
 == demoapp Releases
 v104 Rollback to v102 [email protected]   2013/01/31 14:11:33 (~15s ago)
 v103 Deploy 582fc95   [email protected]   2013/01/31 12:15:35
 v102 Deploy 990d916   [email protected]   2013/01/31 12:01:12

Making a material change to your application, whether it's changing the source or configuration, results in a new release being created.

A release then, is the mechanism behind how Heroku lets you modify the configuration of your application (the config vars) independently of the application source (stored in the slug) - the release binds them together. Whenever you change a set of config vars associated with your application, a new release will be generated.

Dyno manager

Part of the Heroku platform, the dyno manager, is responsible for keeping dynos running. For example, dynos are cycled at least once per day, or whenever the dyno manager detects a fault in the running application (such as an out-of-memory exception) or a problem with the underlying hardware that requires the dyno to be moved to a new physical location.

This dyno cycling happens transparently and automatically on a regular basis, and is logged.

Because Heroku manages and runs applications, there's no need to manage operating systems or other internal system configuration. One-off dynos can be run with their input/output attached to your local terminal. These can also be used to carry out admin tasks that modify the state of shared resources, for example database configuration - perhaps periodically through a scheduler.

Here's the simplest way to create and attach to a one-off dyno:

$  heroku run bash
 Running `bash` attached to terminal... up, run.8963
 ~ $ ls

This will spin up a new dyno, loaded with your release, and then run the bash command - which will provide you with a unix shell (remember that dynos are effectively isolated virtualized unix containers). Once you've terminated your session, or after a period of inactivity, the dyno will be removed.

Changes to the filesystem on one dyno are not propagated to other dynos and are not persisted across deploys and dyno restarts. A better and more scalable approach is to use a shared resource such as a database or queue.

The ephemeral nature of the file system in a dyno can be demonstrated with the above command. If you create a one-off dyno by running heroku run bash to open a Unix shell on the dyno, create a file on that dyno, and then terminate your session, the change is lost. All dynos, even those in the same application, are isolated, and after the session is terminated the dyno will be killed. New dynos are always created from a slug, not from the state of other dynos.

Add-ons

Applications typically make use of add-ons to provide backing services such as databases, queueing & caching systems, storage, email services and more. Add-ons are provided as services by Heroku and third parties - there's a large marketplace of add-ons you can choose from.

Heroku treats these add-ons as attached resources: provisioning an add-on is a matter of choosing one from the add-on marketplace, and attaching it to your application.

For example, here is how to add a Redis backing store add-on (by RedisToGo) to an application:

$  heroku addons:add redistogo:nano

Dynos do not share file state, and so add-ons that provide some kind of storage are typically used as a means of communication between dynos in an application. For example, Redis or Postgres could be used as the backing mechanism in a queue; dynos of the web process type can then push job requests onto the queue, and dynos of the queuty process type can pull job requests from the queue.

The add-on service provider is responsible for the service - and the interface to your application is often provided through a config var. In this example, a REDISTOGO_URL will be automatically added to your application when you provision the add-on. You can write code that connects to the service through the URL, for example:
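In Python, a minimal sketch might look like this (assuming the redis client library is installed):

import os
import redis

# REDISTOGO_URL is added to the app's config vars when the add-on is provisioned
conn = redis.from_url(os.environ["REDISTOGO_URL"])
conn.set("greeting", "hello")
print(conn.get("greeting"))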

Add-ons are associated with an application, much like config vars, and so the earlier definition of a release needs to be refined. A release of your application is not just your slug and config vars; it's your slug, config vars, and the set of provisioned add-ons.

Much like config vars, whenever you add, remove or change an add-on, a new release is created.

Logging and monitoring

Heroku treats logs as streams of time-ordered events, and collates the stream of logs produced from all of the processes running in all dynos, and the Heroku platform components, into the Logplex - a high-performance, real-time system for log delivery.

It's easy to examine the logs across all the platform components and dynos:

$  heroku logs
 2013-02-11T15:19:10+00:00 heroku[router]: at=info method=GET path=/articles/custom-domains host=mydemoapp.heroku.com fwd=74.58.173.188 dyno=web.1 queue=0 wait=0ms connect=0ms service=1452ms status=200 bytes=5783
 2013-02-11T15:19:10+00:00 app[web.2]: Started GET "/" for 1.169.38.175 at 2013-02-11 15:19:10 +0000
 2013-02-11T15:19:10+00:00 app[web.1]: Started GET "/" for 2.161.132.15 at 2013-02-11 15:20:10 +0000

Here you see 3 timestamped log entries, the first from Heroku's router, the last two from two dynos running the web process type.

You can also dive into the logs from just a single dyno, and keep the channel open, listening for further events:

$  heroku logs --ps web.1 --tail
 2013-02-11T15:19:10+00:00 app[web.1]: Started GET "/" for 1.169.38.175 at 2013-02-11 15:19:10 +0000

Logplex keeps a limited buffer of log entries, solely for performance reasons. To persist log entries, and to act on events such as sending an email notification on an exception, use a Logging Add-on, which ties into log drains: an API for receiving the output from Logplex.

HTTP routing

Depending on your dyno formation, some of your dynos will be running the command associated with the web process type, and some will be running other commands associated with other process types.

The dynos that run process types named web are different in one way from all other dynos: they will receive HTTP traffic. Heroku's HTTP routers distribute incoming requests for your application across your running web dynos.

So scaling an app's capacity to handle web traffic involves scaling the number of web dynos:
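$  heroku ps:scale web=5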

A random selection algorithm is used for HTTP request load balancing across web dynos - and this routing handles both HTTP and HTTPS traffic. It also supports multiple simultaneous connections, as well as timeout handling.

Tying it all together

The concepts explained here can be divided into two buckets: those that involve the development and deployment of an application, and those that involve the runtime operation of the Heroku platform and the application after it's deployed.

The following two sections recapitulate the main components of the platform, separating them into these two buckets.

Deploy

  • Applications consist of your source code, a description of any dependencies, and a Procfile.
  • Procfiles list process types - named commands that you may want executed.
  • Deploying applications involves sending the application to Heroku using git.
  • Buildpacks lie behind the slug compilation process. Buildpacks take your application, its dependencies, and the language runtime, and produce slugs.
  • A slug is a bundle of your source, fetched dependencies, the language runtime, and compiled/generated output of the build system - ready for execution.
  • Config vars contain customizable configuration data that can be changed independently of your source code. The configuration is exposed to a running application via environment variables.
  • Add-ons are third party, specialized, value-added cloud services that can be easily attached to an application, extending its functionality.
  • A release is a combination of a slug (your application), config vars and add-ons. Heroku maintains an append-only ledger of releases you make.

Runtime

  • Dynos are isolated, virtualized unix containers that provide the environment required to run an application.
  • Your application's dyno formation is the total number of currently-executing dynos, divided between the various process types you have scaled.
  • The dyno manager is responsible for managing dynos across all applications running on Heroku.
  • Applications with only a single web dyno are put to sleep by the dyno manager after one hour of inactivity. Scaling to multiple web dynos will avoid this.
  • One-off Dynos are temporary dynos that run with their input/output attached to your local terminal. They're loaded with your latest release.
  • Each dyno gets its own ephemeral filesystem, with a fresh copy of the most recent release. It can be used as a temporary scratchpad, but changes to the filesystem are not reflected to other dynos.
  • Logplex automatically collates log entries from all the running dynos of your app, as well as other components such as the routers, providing a single source of activity.
  • Scaling an application involves varying the number of dynos of each process type.


What is the Buildpack Architecture in Pivotal CF?

Pivotal CF uses a flexible approach called buildpacks to dynamically assemble and configure a complete runtime environment for executing a particular type of application. Since buildpacks are extensible to most modern runtimes and frameworks, applications written in nearly any language can be deployed to Pivotal CF.

Developers benefit from an "it just works" experience as the platform applies the appropriate buildpack to detect, download and configure the language, framework, container and libraries for the application.

The buildpacks Pivotal CF provides for Java, Ruby, Node, PHP, Python and Go are part of a broad buildpack provider ecosystem that ensures constant updates and maintenance for virtually any language.

Containerization

Pivotal CF orchestrates multi-node containerized applications on your choice of IaaS and manages their lifecycle, including stateful data, while providing monitoring, alerting and self-healing capabilities.

Combining the power of virtualization with efficient container scheduling, Pivotal CF delivers a higher server density than traditional environments.

Availability and Scaling

At the heart of Pivotal CF's rapid application deployment and horizontal scaling capabilities is an innovative approach to real-time updating of a shared routing tier for all applications. In addition, every application in the system is instantly wired to a fault-tolerant array of software load balancers, which allows applications to meet peak demands with horizontal scale-out and scale-in.

4 Layers of High Availability

Elegant recovery mechanisms within Pivotal CF work in concert to provide self-healing capabilities for deployed applications as well as the cloud platform.

Four levels of HA built into the platform result in a solid foundation for business continuity in the enterprise.

  • Pivotal CF's 3rd generation application health manager automatically detects and recovers failed application instances when the actual state of an app instance does not match the desired state.
  • The system is also designed to detect, alert and auto recover processes running the platform components, should a failure occur.
  • In the event that the VM itself has failed, the system will automatically "resurrect" a VM and restart failed cluster components.
  • Lastly, application instances can be automatically deployed and distributed over multiple availability zones. Therefore, despite the loss of an entire zone, the system automatically adjusts to route requests to the running instances.

Monitoring, metrics and logs

Monitoring

Operators looking to monitor the health and performance of their Pivotal CF deployment can leverage Pivotal Ops Metrics which delivers typical machine metrics (CPU, memory, disk) and statistics for the various components of a Pivotal CF deployment.

This information can be integrated with existing monitoring and alerting infrastructure for proactive monitoring use cases such as expanding capacity of Pivotal CF components based on historical resource utilization.

Logging

The ability to deliver a unified log stream that combines application platform events with end-user actions is key to root cause analysis, understanding end-to-end service delivery, and unlocking the value of an organization's unstructured data. Pivotal CF can direct an aggregated log stream of application events, platform events and end-user actions to built-in clients like the Web Console dashboards, and publish the log stream for integration with external tools.

Roles Management

Pivotal CF provides a clean separation of Developer and Operator functions that segregate access of shared resources and apply organization wide governance models. Without sacrificing user experience, Operations teams using Pivotal CF can enact tight access and policy controls, including mapping to user authentication and authorization systems in the enterprise.

For example, Operators can set fine-grained control of resources, dynamically change system behavior with Feature Flags to grant or restrict access to roles, and set default environment variables for every app.

Application Security Groups

Pivotal CF is the first platform to be able to provide an application-centric security approach. This ensures that your environment is secure across the entire spectrum. Pivotal CF security groups provide you with the ability to define a security access profile that follows your application across every instance within a defined group. This provides administrators with full application-level control of security and compliance.

Pivotal CF security groups provide a way to scale your applications quickly, and still retain your security posture. By linking your security profile to specific groups, we remove the need to map security rules individually to VLANs and VMs.

Services Ecosystem

Pivotal CF Data and Partner Services

Operators can now manage access to Marketplace services, and plans can be made available to all organizations or only to particular organizations.

Some examples of the types of services include:

  • MySQL for Pivotal CF (Relational database)
  • Pivotal HD for Pivotal CF (Hadoop)
  • RabbitMQ for Pivotal CF (Message bus)
  • Redis for Pivotal CF (Key-value cache/store)
  • RiakCS for Pivotal CF (S3 compatible object store)
  • MongoDB for Pivotal CF (NoSQL database)
  • CloudBees Enterprise Jenkins for Pivotal CF (Continuous integration)

The services are integrated with Pivotal CF Operations Manager to allow for full lifecycle management, from click-through provisioning and consolidated logging for visibility and debugging, to in-flight updates and scaling.

How Does Pivotal CF Support Mobile Applications?

Pivotal CF Mobile Services include Push Notifications, API Gateway, and Data Sync that reduce latency, improve user experience, and simplify mobile development. Service details include:

  • Push Notifications. Relevant and contextual notifications sent to an individual's mobile device are essential to building a great mobile app. While consumer apps have long used push notifications, enterprise apps can benefit as well. For instance, banks can notify customers about cash withdrawals, logistics companies can redirect drivers en route, and so on. Push Notifications for Pivotal CF works with iOS, Android, and Microsoft mobile devices.
  • API Gateway. Developing mobile applications involves integrating with multiple backend systems and data stores. But many of these are not optimized for mobile application use, deliver far too much data for consumption on a mobile device, or are too chatty for use on low bandwidth mobile connections. API Gateway for Pivotal CF lets companies create a mobile-optimized API that reduces mobile app latency by shrinking network payloads and reducing round-trips and increases application resilience by gracefully handling unavailability of mobile API endpoints. This is critical since mobile application sessions can span areas of poor or no coverage.
  • Data Sync. Practically every application requires access to data. One notable example is session state data, such as the contents of a shopping cart or a travel itinerary. Data Sync for Pivotal CF simplifies data access for mobile apps by providing a RESTful data access API to sync data between a mobile device and a backend database, and does so in a secure manner, authenticating via OAuth2, Spring Security, and OpenID Connect.

Service Binding

Developers get instant, self-service access to a variety of popular services for new applications, testing, and hands-on experience, including on-demand Pivotal HD clusters as well as data in the enterprise Business Data Lake. Apps can bind to these services via the service broker, automatically reducing cycle time by eliminating the typical complexities around deployment, security, networking, and resource management. Bound services can be managed and monitored within the Web Console, so that developers and application owners can focus on writing code, not configuring infrastructure or middleware.

Additionally, because the platform automatically provisions, configures, manages and stores service connection information, credentials and dependencies, an application can be moved from development and test to staging and production environments with no changes.
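On Cloud Foundry-based platforms such as Pivotal CF, bound service credentials are conventionally exposed to the application through the VCAP_SERVICES environment variable. A minimal Python sketch of reading them (the service and credential names here are illustrative):

import json
import os

# VCAP_SERVICES maps each service type to a list of bound instances
services = json.loads(os.environ["VCAP_SERVICES"])
creds = services["p-mysql"][0]["credentials"]  # illustrative service name
print(creds["hostname"], creds["port"])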

IaaS Integration

Pivotal CF is the only PaaS that supports direct IaaS API integration for turnkey deployment and full life-cycle management. Cloud operators can use a simple interface for rapid deployment on any prevalent Infrastructure as a Service, either on premises (e.g. VMware vSphere) or in the public cloud (e.g. VMware vCloud Air).

With a few clicks, a cloud operator can:

  • Scale the platform
  • Manage the platform component resources
  • Provide continuous software updates and upgrades for Pivotal CF, including OS patches without application downtime

Modern Cloud Platform. Pivotal and VMware.

Enterprise-class capability

Run an enterprise-ready PaaS powered by the VMware virtualization platform that you already trust to run and manage your applications.

Hybrid flexibility

Build and run applications using a PaaS that supports the flexibility of on-premise, public cloud, and hybrid cloud deployment with vCloud Air.

Business agility

Enable business agility by accelerating application development and streamlining the delivery of underlying infrastructure.

Startup Tools: Cloud Services and Tools - Page 7

Maybe you're a Dropbox devotee. Or perhaps you really like streaming Sherlock on Netflix. For that, you can thank the cloud.

In fact, it's safe to say that Amazon Web Services (AWS) has become synonymous with cloud computing; it's the platform on which some of the Internet's most popular sites and services are built. But just as cloud computing is used as a simplistic catchall term for a variety of online services, the same can be said for AWS: there's a lot more going on behind the scenes than you might think.

If you've ever wanted to drop terms like EC2 and S3 into casual conversation (and really, who doesn't?), we're going to demystify the most important parts of AWS and show you how Amazon's cloud really works.

Elastic Compute Cloud (EC2)

Think of EC2 as the computational brain behind an online application or service. EC2 is made up of myriad instances, which is really just Amazon's way of saying virtual machines. Each server can run multiple instances at a time, in either Linux or Windows configurations, and developers can harness multiple instances (hundreds, even thousands) to handle computational tasks of varying degrees. This is what the elastic in Elastic Compute Cloud refers to: EC2 will scale based on a user's unique needs.

Instances can be configured as either Windows machines, or with various flavors of Linux. Again, each instance comes in different sizes, depending on a developer's needs. Micro instances, for example, only come with 613 MB of RAM, while Extra Large instances can go up to 15GB. There are also other configurations for various CPU or GPU processing needs.

Finally, EC2 instances can be deployed across multiple regions, which is really just a fancy way of referring to the geographic locations of Amazon's data centers. Multiple instances can be deployed within the same region (on separate blocks of infrastructure called availability zones, such as US East-1, US East-2, etc.), or across more than one region if increased redundancy and reduced latency are desired.
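As a rough sketch of what this looks like in practice, here is how a small instance might be launched in a specific region using the boto3 Python library (the AMI ID below is a placeholder):

import boto3

# Create an EC2 client pinned to the US East region
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder image ID
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])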

Elastic Load Balancing (ELB)

Another reason why a developer might deploy EC2 instances across multiple availability zones and regions is load balancing. Netflix, for example, uses a number of EC2 instances across multiple geographic locations. If there were a problem with Amazon's US East data center, users would hopefully be able to connect to Netflix via the service's US West instances instead.

But what if there is no problem, and a higher number of users are connecting via instances on the East Coast than on the West? Or what if something goes wrong with a particular instance in a given availability zone? Amazon's Elastic Load Balancing allows developers to create multiple EC2 instances and set rules that allow traffic to be distributed between them. That way, no one instance is needlessly burdened while others idle, and when combined with EC2's ability to scale, more instances can also be added for balance where required.

Elastic Block Store (EBS)

Think of EBS as a hard drive in your computer: it's where an EC2 instance stores persistent files and applications that can be accessed again over time. An EBS volume can only be attached to one EC2 instance at a time, but multiple volumes can be attached to the same instance. An EBS volume can range from 1GB to 1TB in size, but must be located in the same availability zone as the instance you'd like to attach it to.

Because EC2 instances by default don't include a great deal of local storage, it's possible to boot from an EBS volume instead. That way, when you shut down an EC2 instance and want to re-launch it at a later date, it's not just files and application data that persist, but the operating system itself.

Simple Storage Service (S3)

Unlike EBS volumes, which are used to store operating system and application data for use with an EC2 instance, Amazon's Simple Storage Service is where publicly facing data is usually stored instead. In other words, when you upload a new profile picture to Twitter, it's not being stored on an EBS volume, but with S3.

S3 is often used for static content, such as videos, images or music, though virtually anything can be uploaded and stored. Files uploaded to S3 are referred to as objects, which are then stored in buckets. As with EC2, S3 storage is scalable, which means that the only limit on storage is the amount of money you have to pay for it.
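As a sketch, uploading an object to a bucket with the boto3 Python library might look like this (the bucket and key names are hypothetical):

import boto3

s3 = boto3.client("s3")

# Store a local image as an object under the "images/" key prefix
with open("profile.png", "rb") as f:
    s3.put_object(Bucket="my-demo-bucket", Key="images/profile.png", Body=f)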

Buckets are also stored in regions, and within that region objects "are redundantly stored on multiple devices across multiple facilities." However, this can cause latency issues if, for example, a user in Europe is trying to access files stored in a bucket within the US West region. As a result, Amazon also offers a service called CloudFront, which allows objects to be mirrored across other regions.

While these are the core features that make up Amazon Web Services, this is far from a comprehensive list. For example, on the AWS landing page alone, you'll find things such as DynamoDB, Route53, Elastic Beanstalk, and other features that would take much longer to detail here.

However, if you've ever been confused about how the basics of AWS work, specifically how computing capacity and storage are provisioned and scaled, we hope this gives you a better sense of how Amazon's brand of cloud works.

Correction: Initially, we confused regions in AWS with availability zones. As Mhj.work explains in the comments of this article, availability zones are "discrete" blocks of infrastructure at a single geographical location, whereas the geographical units are called regions. So, for example, EU-West is the region, whilst EU-West-1, EU-West-2, and EU-West-3 are availability zones in that region. We have updated the text to make this point clearer.


High Memory

Machines for tasks that require more memory relative to virtual cores.

High CPU

Machines for tasks that require more virtual cores relative to memory.

Shared Core

Machines for tasks that don't require a lot of resources but do have to remain online for long periods of time.

Premium OS Pricing

Pricing for premium operating systems differs based on the machine type where the premium operating system image is used. For example, an f1-micro instance is charged $0.02 per hour for a SUSE image, while an n1-standard-8 instance is charged $0.11 per hour. All prices for premium operating systems are in addition to the charges for the underlying machine type.
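A quick sketch of how the charges combine, in Python (the base machine-type rate below is a placeholder, not an actual Google price):

machine_rate = 0.013  # placeholder hourly rate for the machine type
suse_premium = 0.02   # SUSE premium for an f1-micro, per the text above
total_hourly = machine_rate + suse_premium
print(f"${total_hourly:.3f} per hour")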

Pricing for premium operating systems is the same worldwide and does not differ based on zones or regions, as machine type prices do.

More details

Further pricing details are available for: Network Pricing; Load Balancing and Protocol Forwarding; Persistent Disk Pricing; Local SSD Pricing; Image Storage; and IP Address Pricing.


Resources

  • Querying massive datasets can be time consuming and expensive without the right hardware and infrastructure. Google BigQuery solves this problem by enabling super-fast, SQL-like queries against append-only tables, using the processing power of Google's infrastructure. Simply move your data into BigQuery and let us handle the hard work.

  • Before you can query your data, you first need to load it into BigQuery. You can bulk load the data by using a job, or stream records individually.

  • Queries are written in BigQuery's SQL dialect. BigQuery supports both synchronous and asynchronous query methods. Both methods are handled by a job, but the "synchronous" method exposes a timeout value that waits until the job has finished before returning.
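For instance, a minimal synchronous query using the google-cloud-bigquery Python client might look like this (the sample table is Google's public Shakespeare dataset):

from google.cloud import bigquery

client = bigquery.Client()

# query() starts the job; result() blocks until it finishes - the "synchronous" pattern
job = client.query(
    "SELECT word, word_count FROM `bigquery-public-data.samples.shakespeare` LIMIT 5"
)
for row in job.result():
    print(row.word, row.word_count)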




One click to application awesomeness.

Bitnami makes it incredibly easy to deploy apps with native installers, as virtual machines, or in the cloud.

  • Cisco
  • Siemens
  • Starbucks
  • Hitachi
  • eBay

I love the Bitnami guys. They make it dead simple for anybody to run an application in the cloud.

- Dr. Werner Vogels, Amazon VP & CTO

  • The most popular open source apps

    90+ apps to choose from

    Bitnami provides the latest versions of your favorite applications and development stacks, tested and optimized for the deployment environment of your choice. Choose from WordPress, Redmine, SugarCRM, Alfresco, Drupal, MediaWiki, GitLab and way more.

  • Cloud or local: deploy anywhere

    Run anywhere

  • Fully configured and ready to run

    Deploy in one click

    With one click, you can deploy any app or dev stack to any environment. Our images deploy consistently every time. All Bitnami apps and dev stacks have been pre-integrated and configured so that you can become productive immediately.

  • When you're ready to go big

    Bitnami Cloud Hosting

Latest blog entries

  1. We recently released new versions of Ruby stacks that fix several security issues. An additional fix for DoS vulnerability CVE-2014-8090 has been released for all Ruby versions. We...

  2. Moodle, the popular Open Source e-learning platform, released their version 2.8.0 a couple of days ago. We are glad to announce that this version is already available in Bitnami. You can find...

  3. We are happy to announce that Mahara is now available on the Bitnami library. Mahara is an open source ePortfolio and social networking web application created by the government of New...

