Build a Containerized Rails App with Web App for Containers + MySQL! - Yoichi Kawasaki
Because Web App for Containers uses Docker containers to host the application stack, the OSS-based applications you run on Linux today can be containerized together with their entire stack and used on Web App for Containers as-is. In this webinar, using a simple MySQL + Ruby on Rails application as the subject, we walk through the full flow of containerizing the app and deploying it to Web App for Containers, and introduce continuous deployment using CI tools. We use Azure DB for MySQL, Azure's fully managed MySQL service, to run the application in a completely managed environment.
Apache Arrow 1.0 - A cross-language development platform for in-memory data - Kouhei Sutou
Apache Arrow is a cross-language development platform for in-memory data. You can use Apache Arrow to process large data efficiently in Python and other languages such as R. Apache Arrow is the future of data processing. Apache Arrow 1.0, the first major version, was released on 2020-07-24. It's a good time to get to know Apache Arrow and start using it.
The document presents a "Technology Tree" that outlines different technology options across five categories - capturing, conveying, creating, cradling, and communicating information. It provides examples of specific technologies that fall under each category and questions to consider to determine the appropriate technologies based on needs and constraints. The tree is meant to help systematically choose the necessary technologies by moving through the branches in a specified order.
This document summarizes a research article that explores the impact of family court processes on stepparents. The researchers interviewed 12 stepparents who were in established relationships with parents involved in family court litigation. The primary themes identified were that stepparents felt excluded from and invisible in the family court system over which they had little control. The findings suggest that the negative psychological impacts on stepparents would be lessened if policies of inclusion were adopted in family court proceedings.
This document provides information about an environmental consulting firm called SAMBITO. It lists their contact information in Quito, Ecuador and describes the various services they offer, including:
- Environmental impact studies and audits
- Consulting for agro-industrial and industrial clients
- Representing eco-friendly brands and products
- Designing ecological projects focused on sustainable development
- Integrated waste management programs and consultancy for every stage of the solid waste cycle
A keynote speech delivered at Jinnah University for Women on improving the current educational scenario of Pakistan by initiating a self-sustaining teacher training program.
The document summarizes the steps to upgrade an Oracle VM (OVM) 2.2 server and manager to OVM 3.0.1. It involves installing a new OVM 3.0.1 manager on a Linux system using VirtualBox. Then installing new OVM 3.0.1 servers and importing existing virtual servers, templates, and resources from the 2.2 environment. It provides details on the manager and server installation and configuration, including network settings and access URLs for the new OVM 3.0.1 environment.
Consumer spending survey "New Year and Christmas 2015" by Deloitte.
The study covers:
- The economy and the well-being of Russians
- The structure of Russians' New Year budget
- The top 10 most desired gifts
- When and where Russians will buy gifts
- Online versus traditional stores
This document contains the code for several stored procedures used in a library database. The procedures handle tasks like adding new adult and juvenile members, checking books in and out, getting member and book information, and updating member expiration dates. Validation checks are performed on parameters and data is inserted, updated, and selected from relevant tables within transactions.
Summer News from Social Networks, Marketing Magazin, Sep 2016, No. 423, pp. 54-56 - Urska Saletinger
An overview of the changes and new features introduced on social networks this summer.
Sources used are collected at: http://bit.ly/2buAmPY
An Overview of WorldCat Navigator Slides - Sue Bennett
This document provides an overview of WorldCat Navigator, a system that allows libraries to share their collections. It explains that Navigator creates a single catalog for a consortium of libraries, allowing users to search all collections at once and request items from any library. The workflow for item requests is described, showing how Navigator streamlines the process compared to traditional interlibrary loan. Implementation of Navigator involves preparation, configuration, testing, training, and a go-live phase. Overall, Navigator aims to give users broader access to materials while making resource sharing more efficient for libraries.
The document appears to be a presentation from the Developers Summit 2019 hosted by DENSO Corporation. It discusses DENSO's initiatives in IT and digital innovation. The presentation was given by Yoshiei Sato and Susumu Tomita from DENSO's Digital Innovation, Engineering Research & Development department. The document contains technical details and diagrams related to software development, data processing, and connected vehicle technologies.
This document discusses Amazon S3 and Glacier storage services. It provides an overview of S3 and Glacier, including how they are used to store and retrieve objects, their scalability and availability features, and pricing and billing models. The document also compares S3 and Glacier and how they are suited for different storage needs based on access frequency and cost.
This document summarizes a presentation on machine learning given by Masaki Samejima at the 2019 Developers Summit. The presentation covered topics including computer vision models and frameworks, model serving, AutoML, and hardware for machine learning. Key frameworks discussed were MXNet, Gluon, PyTorch, TensorFlow and ONNX. The document also provided examples of computer vision tasks like classification, detection and segmentation as well as generative models.
This document discusses gumi's infrastructure and services. It describes moving from 20 app servers to 90, scaling out Aurora from 3 to 11 instances, and increasing Redis instances from 1 to 14. The document also outlines gumi's approach to using AWS services like S3, CloudFront, Aurora, and Redis across public, private and management network segments.
5. App Engine's First 3 Years - A Platform That Keeps Evolving
Apr 2008 Python launch
May 2008 Memcache API, Images API
Jul 2008 Logs export
Aug 2008 Batch write/delete
Oct 2008 HTTPS support
Dec 2008 Status dashboard, quota details
Feb 2009 Billing, Remote API, Larger HTTP request/response size limits (10MB)
Apr 2009 Java launch, Bulkloader (DB import), Cron jobs, SDC
May 2009 Key-only queries, Quota API
Jun 2009 Task queue API, Django 1.0 support
Sep 2009 XMPP API, Remote API shell, Django 1.1 support
Oct 2009 Incoming email
Dec 2009 Blobstore API
Feb 2010 Datastore cursors, Async URLfetch, App stats
Mar 2010 Denial-of-Service filtering, eventual consistency support
May 2010 OpenID, OAuth, App Engine for Business, new bulkloader
Aug 2010 Namespaces, increased quotas, high perf image serving
Oct 2010 Instances console, datastore admin & bulk entity deletes
Dec 2010 Channel API, 10-minute tasks & cron jobs, AlwaysOn & Warmup
Jan 2011 High Replication datastore, entity copy b/w apps, 10-minute URLfetch
Feb 2011 Improved XMPP and Task Queue, Django 1.2 support
6. Roadmap
SSL access on non-appspot.com domains
Full-text Search over Datastore
Support for Python 2.7
Background servers capable of running for longer than 30s
Support for running MapReduce jobs across App Engine datasets
Bulk Datastore Import and Export tool
Improved monitoring and alerting of application serving
Logging system improvements to remove limits on size and storage
Raise HTTP request and response size limits
Integration with Google Storage for Developers
Programmatic Blob creation in Blobstore
Quota and presence improvements for Channel API
8. What Is App Engine for Business?
In addition to the App Engine platform:
99.9% SLA
Cloud SQL
Paid support
Domain console
Hosted SSL
9. Books Available
Programming Google App Engine
http://www.oreilly.co.jp/books/9784873114750/
オープンソース徹底活用 Slim3 on Google App Engine for Java
ISBN-10: 4798026999
11. Datastore Design
Don't shy away from denormalization - no joins
Create only the indexes you need
Keep Entity Groups minimal
Use them only where transactions are needed
Use the faster option when you can
A keys_only query is faster than a full query
A get is faster than a query
A batch get is faster than multiple gets
Cases where splitting a kind helps
When only some properties are needed
Accept occasional failures
Retry, and specify a deadline
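The "use the faster option" items all come down to reducing round trips and payload size. A minimal plain-Python sketch (no App Engine dependency; `FakeDatastore` and its methods are illustrative stand-ins, not a real API) showing why one batch get beats many single gets in round-trip count:

```python
# Hypothetical in-memory datastore that counts round trips (RPCs).
class FakeDatastore:
    def __init__(self, entities):
        self.entities = entities  # key -> entity dict
        self.rpc_count = 0

    def get(self, key):
        # One round trip per single get.
        self.rpc_count += 1
        return self.entities[key]

    def get_multi(self, keys):
        # A batch get fetches every key in a single round trip.
        self.rpc_count += 1
        return [self.entities[k] for k in keys]

ds = FakeDatastore({"user%d" % i: {"name": "u%d" % i} for i in range(10)})
keys = ["user%d" % i for i in range(10)]

for k in keys:          # 10 single gets -> 10 round trips
    ds.get(k)
singles = ds.rpc_count

ds.rpc_count = 0
ds.get_multi(keys)      # one batch get -> 1 round trip
batch = ds.rpc_count
```

The real Datastore adds per-RPC latency on top of this, which is why batching dominates for bulk reads.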
13. Datastore Design - Denormalization
Before
from google.appengine.ext import db

class User(db.Model):
    name = db.StringProperty()
    groups = db.ListProperty(db.Key)

class Group(db.Model):
    name = db.StringProperty()

user = User.get(user_key)
group_names = [group.name for group in db.get(user.groups)]
14. Datastore Design - Denormalization
After
from google.appengine.ext import db

class User(db.Model):
    name = db.StringProperty()
    groups = db.ListProperty(db.Key)
    group_names = db.StringListProperty()

class Group(db.Model):
    name = db.StringProperty()

user = User.get(user_key)
group_names = user.group_names
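The before/after difference can be simulated in plain Python (dicts stand in for Datastore entities; all names here are illustrative): the normalized read needs a second fetch to resolve group names, while the denormalized entity answers from the data already stored on it.

```python
# Toy entities: a user referencing groups by key, plus the groups themselves.
groups = {
    "g1": {"name": "admins"},
    "g2": {"name": "editors"},
}

# Normalized: the user stores only group keys, so reading group names
# requires a second fetch (the equivalent of db.get(user.groups)).
user_normalized = {"name": "alice", "groups": ["g1", "g2"]}
names_via_fetch = [groups[k]["name"] for k in user_normalized["groups"]]

# Denormalized: group names are copied onto the user at write time,
# so a single get of the user entity is enough.
user_denormalized = {
    "name": "alice",
    "groups": ["g1", "g2"],
    "group_names": ["admins", "editors"],
}
names_direct = user_denormalized["group_names"]
```

The trade-off is the usual one: reads get cheaper, but every group rename must now also update the copies on the affected users.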
16. Datastore Design - Create Only the Indexes You Need
from google.appengine.ext import db

class MyModel(db.Model):
    name = db.StringProperty()
    total = db.IntegerProperty(indexed=False)

Write speed slows as the number of indexes grows
18. Datastore Design - Entity Group
How to create an Entity Group
class MyModel(db.Model):
    # ...

# Creating an entity on its own makes it a new Entity Group of its own
my_entity = MyModel()
my_entity.put()

# Specifying a parent places it in the same Entity Group as the parent
my_second_entity = MyModel(parent=my_entity)
my_second_entity.put()

# It does not need to be the same kind
my_third_entity = MyOtherModel(parent=my_second_entity)
my_third_entity.put()
20. Datastore Design - Entity Group
Entity Groups should not be used for every parent-child relationship
For example, they are a poor fit for BlogEntry and Comment
Use them primarily where transactions are required
They are also useful when building your own search indexes
21. Datastore Design - Entity Group Usage Example
Search index - Before
class Sentence(db.Model):
    body = db.TextProperty()
    indexes = db.StringListProperty()

query = Sentence.all().filter(
    "indexes =", search_word
)
search_result = query.fetch(20)

The fetch deserializes the indexes property even though it is not needed
22. Datastore Design - Entity Group Usage Example
Search index - After
class Sentence(db.Model):
    body = db.TextProperty()

class SearchIndex(db.Model):
    indexes = db.StringListProperty()

query = SearchIndex.all(keys_only=True).filter(
    "indexes =", search_word)
search_result = db.get(
    [key.parent() for key in query.fetch(20)])

Form the Entity Group (Sentence as parent of SearchIndex) at save time
31. Avoiding Datastore Contention
Writes to a single Entity or Entity Group
Rule of thumb: about one write per second
Countermeasures:
Keep Entity Groups as small as possible
Use techniques such as sharded counters
32. Sharded Counter - A Simple Implementation
from google.appengine.ext import db
import random
# http://goo.gl/8dGO

class SimpleCounterShard(db.Model):
    """Shards for the counter"""
    count = db.IntegerProperty(required=True, default=0)

NUM_SHARDS = 20

def get_count():
    """Retrieve the value for the sharded counter."""
    total = 0
    for counter in SimpleCounterShard.all():
        total += counter.count
    return total

def increment():
    """Increment the value for the sharded counter."""
    def txn():
        index = random.randint(0, NUM_SHARDS - 1)
        shard_name = "shard" + str(index)
        counter = SimpleCounterShard.get_by_key_name(shard_name)
        if counter is None:
            counter = SimpleCounterShard(key_name=shard_name)
        counter.count += 1
        counter.put()
    db.run_in_transaction(txn)
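The sharding idea itself is independent of App Engine. A plain-Python simulation (a dict stands in for the shard entities; no transactions, illustrative only) showing that the counter's value is simply the sum over shards, while writes are spread across many keys:

```python
import random

NUM_SHARDS = 20
shards = {}  # shard_name -> count, standing in for SimpleCounterShard entities

def increment():
    # Pick a shard at random so concurrent writers rarely touch the same key.
    index = random.randint(0, NUM_SHARDS - 1)
    name = "shard" + str(index)
    shards[name] = shards.get(name, 0) + 1

def get_count():
    # The counter value is the sum of all shard counts.
    return sum(shards.values())

for _ in range(1000):
    increment()
```

Because each write lands on one of 20 keys instead of a single hot entity, the per-entity write rate drops roughly 20-fold, which is the whole point of the technique.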
33. Fork-Join Queue
Building high-throughput data pipelines with Google App Engine
http://goo.gl/ntlH
A simple counter example
http://paste.shehas.net/show/137/