Latency increases when Envoy is attached to the Elasticsearch coordinating node
I'm stuck on a latency issue with Envoy after adding it in front of our service.
Here is the simplified architecture:
AS-IS: Client <-> ALB:TargetGroup(port:1111) <-> Coordinating node (ES)
TO-BE: Client <-> ALB:TargetGroup(port:2222) <-> Envoy <-> Coordinating node (ES)
Envoy runs as a sidecar on the same instance as the coordinating node.
Envoy writes a duration field in its access log, and I couldn't find any timeouts or increased latency there, so Envoy itself doesn't seem to be adding latency. But I also couldn't spot the latency increase in the ALB TargetGroup response time metric.
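To break the duration down further, I'm planning to add per-phase timing fields to the json_format block; a sketch (the field names on the left are mine; the command operators are standard, though exact availability can vary by Envoy version):

  # Additional timing fields for the existing json_format block (sketch)
  req_duration: "%REQUEST_DURATION%"          # start of request -> last request byte received from downstream
  resp_duration: "%RESPONSE_DURATION%"        # start of request -> first response byte read from upstream
  resp_tx_duration: "%RESPONSE_TX_DURATION%"  # first upstream response byte -> last byte sent downstream
  response_flags: "%RESPONSE_FLAGS%"          # e.g. UT/UF/UC hint at upstream timeouts or connection problems
  upstream_host: "%UPSTREAM_HOST%"
  upstream_svc_time: "%RESP(X-ENVOY-UPSTREAM-SERVICE-TIME)%"  # upstream time as measured by Envoy's router (ms)

Comparing these against the ALB's TargetResponseTime should show whether the extra time is spent receiving the request, waiting on Elasticsearch, or sending the response back downstream.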
I've also tried the following, with the same result:
- dns_refresh_rate: 3600000 (see the placement sketch after this list)
- removing the Lua filter
- generate_request_id: false
- circuit_breakers: DEFAULT thresholds with max_connections, max_requests, and max_pending_requests all set to 2147483647 (full config below)
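For reference, dns_refresh_rate isn't visible in the envoy.yaml below; it sits on the cluster, roughly like this (a sketch only; dns_refresh_rate is a Duration field, so the usual YAML form is a value like 3600s):

  clusters:
  - name: elasticsearch
    type: STRICT_DNS
    # Re-resolve DNS for the STRICT_DNS cluster only once per hour (sketch value)
    dns_refresh_rate: 3600s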
ALB response time metric (left: TO-BE, right: AS-IS):
envoy.yaml:
!ignore dynamic_sockets:
  - &admin_address { address: 0.0.0.0, protocol: TCP, port_value: 9901 }
  - &elasticsearch_proxy_address { address: 0.0.0.0, protocol: TCP, port_value: 9100 }
  # Use 'address: host.docker.internal' for local test
  - &elasticsearch_address { address: 127.0.0.1, protocol: TCP, port_value: 9200 }

admin:
  address:
    socket_address: *admin_address

# The static_resources contain everything that is configured
# statically when Envoy starts, as opposed to dynamically at runtime.
static_resources:
  listeners:
  - name: elasticsearch_proxy
    address:
      socket_address: *elasticsearch_proxy_address
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          request_timeout: 2s
          generate_request_id: false
          access_log:
          - name: envoy.access_loggers.file
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog
              path: /services/logs/envoy/access.log
              log_format:
                # https://www.envoyproxy.io/docs/envoy/latest/configuration/observability/access_log/usage
                json_format:
                  authority: "%REQ(:AUTHORITY)%"
                  req_method: "%REQ(:METHOD)%"
                  req_path: "%REQ(X-ENVOY-ORIGINAL-PATH?:PATH)%"
                  req_body: "%DYNAMIC_METADATA(envoy.filters.http.lua:request.info:request_body)%"
                  downstream_addr: "%DOWNSTREAM_REMOTE_ADDRESS%"
                  start_time: "%START_TIME%"
                  bytes_sent: "%BYTES_SENT%"
                  res_code: "%RESPONSE_CODE%"
                  elapsed: "%DURATION%"
          route_config:
            name: local_router
            virtual_hosts:
            - name: local
              domains: [ "*" ]
              routes:
              - match: { prefix: "/" }
                route:
                  cluster: elasticsearch
          http_filters:
          - name: envoy.filters.http.lua
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua
              inline_code: |
                local filter = require "lib.filter"
                function envoy_on_request(request_handle)
                  filter.handle_request(request_handle)
                end
          - name: envoy.filters.http.router
  clusters:
  - name: elasticsearch
    connect_timeout: 2s
    type: STRICT_DNS
    lb_policy: ROUND_ROBIN
    dns_lookup_family: V4_ONLY
    circuit_breakers:
      thresholds:
      - priority: "DEFAULT"
        max_connections: 2147483647
        max_requests: 2147483647
        max_pending_requests: 2147483647
    load_assignment:
      cluster_name: elasticsearch
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: *elasticsearch_address
lib/filter.lua:
local M = {}

-- Stream handle API of the Lua HTTP filter
-- @doc https://www.envoyproxy.io/docs/envoy/v1.7.0/configuration/http_filters/lua_filter#config-http-filters-lua-stream-handle-api
function M.handle_request(request)
  local t = {}
  -- Collect the request body chunks, stripping newlines and whitespace.
  for chunk in request:bodyChunks() do
    local len = chunk:length()
    local str = chunk:getBytes(0, len)
    str = str:gsub('\n', '')
    str = str:gsub('%s', '')
    table.insert(t, str)
  end
  local request_body = table.concat(t, '')
  -- Expose the request body to the access log via dynamic metadata.
  -- https://www.envoyproxy.io/docs/envoy/latest/configuration/http/http_filters/lua_filter#dynamic-metadata-object-api
  request:streamInfo():dynamicMetadata():set("envoy.filters.http.lua",
    "request.info", { request_body = request_body }
  )
end

return M
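As a further debugging step I'm thinking about extending the filter roughly like this (a sketch using the same stream-handle API as above, meant to replace M.handle_request), so slow entries in the access log can be correlated with payload size. The size could then be logged with an extra json_format field such as "%DYNAMIC_METADATA(envoy.filters.http.lua:request.info:body_size)%":

-- Sketch: same logic as M.handle_request above, but also records the
-- accumulated request body size in the dynamic metadata.
function M.handle_request(request)
  local t = {}
  local body_size = 0
  for chunk in request:bodyChunks() do
    local len = chunk:length()
    body_size = body_size + len
    local str = chunk:getBytes(0, len)
    str = str:gsub('\n', ''):gsub('%s', '')
    table.insert(t, str)
  end
  request:streamInfo():dynamicMetadata():set("envoy.filters.http.lua",
    "request.info",
    { request_body = table.concat(t, ''), body_size = body_size }
  )
end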
What else can I do to find the root cause of the latency increase?
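One thing I'm also considering (a sketch only; port 9101 and the stat_prefix are arbitrary, envoy.filters.network.tcp_proxy is the standard Envoy extension): add a second listener that only does plain TCP proxying to the same cluster and point a test target group at it. The extra hop stays, but routing, the Lua filter, and HTTP access logging are removed, which should show whether the latency comes from the hop and connection handling or from HTTP-level processing.

  # Sketch: a second listener that does L4 proxying only, for comparison.
  - name: elasticsearch_tcp_proxy
    address:
      socket_address: { address: 0.0.0.0, protocol: TCP, port_value: 9101 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.tcp_proxy
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
          stat_prefix: ingress_tcp
          cluster: elasticsearch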
Tags: elasticsearch, latency, envoyproxy, aws-application-load-balancer